Dec 04 09:18:52 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 04 09:18:52 crc restorecon[4707]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 09:18:52 crc restorecon[4707]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc 
restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc 
restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 
09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc 
restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc 
restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 
crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 
09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 09:18:52 crc 
restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc 
restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc 
restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 
crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc 
restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:52 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc 
restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 09:18:53 crc restorecon[4707]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc 
restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 09:18:53 crc restorecon[4707]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 09:18:53 crc restorecon[4707]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 04 09:18:53 crc kubenswrapper[4841]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 04 09:18:53 crc kubenswrapper[4841]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 04 09:18:53 crc kubenswrapper[4841]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 04 09:18:53 crc kubenswrapper[4841]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 04 09:18:53 crc kubenswrapper[4841]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 04 09:18:53 crc kubenswrapper[4841]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.449632 4841 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456101 4841 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456137 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456142 4841 feature_gate.go:330] unrecognized feature gate: Example Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456149 4841 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456156 4841 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456161 4841 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456167 4841 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456172 4841 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456178 4841 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456184 4841 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456190 4841 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456196 4841 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456202 4841 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456207 4841 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456212 4841 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456217 4841 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456221 4841 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456227 4841 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456231 4841 feature_gate.go:330] 
unrecognized feature gate: SignatureStores
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456235 4841 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456239 4841 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456243 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456247 4841 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456252 4841 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456256 4841 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456260 4841 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456266 4841 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456271 4841 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456276 4841 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456280 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456284 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456289 4841 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456294 4841 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456319 4841 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456326 4841 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456331 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456336 4841 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456341 4841 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456346 4841 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456350 4841 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456355 4841 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456360 4841 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456365 4841 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456371 4841 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456375 4841 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456380 4841 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456384 4841 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456389 4841 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456394 4841 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456399 4841 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456403 4841 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456407 4841 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456411 4841 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456416 4841 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456420 4841 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456424 4841 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456428 4841 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456432 4841 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456436 4841 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456442 4841 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456446 4841 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456450 4841 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456454 4841 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456459 4841 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456465 4841 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456471 4841 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456476 4841 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456482 4841 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456488 4841 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456493 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.456498 4841 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456614 4841 flags.go:64] FLAG: --address="0.0.0.0"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456628 4841 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456637 4841 flags.go:64] FLAG: --anonymous-auth="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456643 4841 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456650 4841 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456656 4841 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456663 4841 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456670 4841 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456675 4841 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456680 4841 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456686 4841 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456700 4841 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456705 4841 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456710 4841 flags.go:64] FLAG: --cgroup-root=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456715 4841 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456720 4841 flags.go:64] FLAG: --client-ca-file=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456725 4841 flags.go:64] FLAG: --cloud-config=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456730 4841 flags.go:64] FLAG: --cloud-provider=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456734 4841 flags.go:64] FLAG: --cluster-dns="[]"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456741 4841 flags.go:64] FLAG: --cluster-domain=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456746 4841 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456751 4841 flags.go:64] FLAG: --config-dir=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456756 4841 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456788 4841 flags.go:64] FLAG: --container-log-max-files="5"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456800 4841 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456805 4841 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456809 4841 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456815 4841 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456820 4841 flags.go:64] FLAG: --contention-profiling="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456824 4841 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456829 4841 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456835 4841 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456840 4841 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456850 4841 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456855 4841 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456861 4841 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456868 4841 flags.go:64] FLAG: --enable-load-reader="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456874 4841 flags.go:64] FLAG: --enable-server="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456880 4841 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456898 4841 flags.go:64] FLAG: --event-burst="100"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456905 4841 flags.go:64] FLAG: --event-qps="50"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456910 4841 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456915 4841 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456920 4841 flags.go:64] FLAG: --eviction-hard=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456928 4841 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456933 4841 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456939 4841 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456953 4841 flags.go:64] FLAG: --eviction-soft=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456959 4841 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456965 4841 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456970 4841 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456975 4841 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456981 4841 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456986 4841 flags.go:64] FLAG: --fail-swap-on="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456991 4841 flags.go:64] FLAG: --feature-gates=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.456998 4841 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457004 4841 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457009 4841 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457014 4841 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457020 4841 flags.go:64] FLAG: --healthz-port="10248"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457025 4841 flags.go:64] FLAG: --help="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457030 4841 flags.go:64] FLAG: --hostname-override=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457035 4841 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457040 4841 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457046 4841 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457051 4841 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457056 4841 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457061 4841 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457067 4841 flags.go:64] FLAG: --image-service-endpoint=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457073 4841 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457079 4841 flags.go:64] FLAG: --kube-api-burst="100"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457085 4841 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457091 4841 flags.go:64] FLAG: --kube-api-qps="50"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457096 4841 flags.go:64] FLAG: --kube-reserved=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457101 4841 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457106 4841 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457111 4841 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457116 4841 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457121 4841 flags.go:64] FLAG: --lock-file=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457126 4841 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457131 4841 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457136 4841 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457145 4841 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457159 4841 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457164 4841 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457169 4841 flags.go:64] FLAG: --logging-format="text"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457174 4841 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457180 4841 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457185 4841 flags.go:64] FLAG: --manifest-url=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457204 4841 flags.go:64] FLAG: --manifest-url-header=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457212 4841 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457217 4841 flags.go:64] FLAG: --max-open-files="1000000"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457224 4841 flags.go:64] FLAG: --max-pods="110"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457229 4841 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457234 4841 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457240 4841 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457245 4841 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457251 4841 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457256 4841 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457261 4841 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457274 4841 flags.go:64] FLAG: --node-status-max-images="50"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457280 4841 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457285 4841 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457291 4841 flags.go:64] FLAG: --pod-cidr=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457295 4841 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457305 4841 flags.go:64] FLAG: --pod-manifest-path=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457310 4841 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457315 4841 flags.go:64] FLAG: --pods-per-core="0"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457320 4841 flags.go:64] FLAG: --port="10250"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457325 4841 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457331 4841 flags.go:64] FLAG: --provider-id=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457336 4841 flags.go:64] FLAG: --qos-reserved=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457341 4841 flags.go:64] FLAG: --read-only-port="10255"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457346 4841 flags.go:64] FLAG: --register-node="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457351 4841 flags.go:64] FLAG: --register-schedulable="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457356 4841 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457371 4841 flags.go:64] FLAG: --registry-burst="10"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457376 4841 flags.go:64] FLAG: --registry-qps="5"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457382 4841 flags.go:64] FLAG: --reserved-cpus=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457395 4841 flags.go:64] FLAG: --reserved-memory=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457402 4841 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457408 4841 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457413 4841 flags.go:64] FLAG: --rotate-certificates="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457418 4841 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457423 4841 flags.go:64] FLAG: --runonce="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457429 4841 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457434 4841 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457440 4841 flags.go:64] FLAG: --seccomp-default="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457445 4841 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457450 4841 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457456 4841 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457461 4841 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457467 4841 flags.go:64] FLAG: --storage-driver-password="root"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457472 4841 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457477 4841 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457483 4841 flags.go:64] FLAG: --storage-driver-user="root"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457488 4841 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457493 4841 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457499 4841 flags.go:64] FLAG: --system-cgroups=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457504 4841 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457513 4841 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457518 4841 flags.go:64] FLAG: --tls-cert-file=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457523 4841 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457533 4841 flags.go:64] FLAG: --tls-min-version=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457539 4841 flags.go:64] FLAG: --tls-private-key-file=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457544 4841 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457549 4841 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457555 4841 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457560 4841 flags.go:64] FLAG: --v="2"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457567 4841 flags.go:64] FLAG: --version="false"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457573 4841 flags.go:64] FLAG: --vmodule=""
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457580 4841 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.457586 4841 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457743 4841 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457751 4841 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457786 4841 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457792 4841 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457797 4841 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457804 4841 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457810 4841 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457817 4841 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457822 4841 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457826 4841 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457831 4841 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457835 4841 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457845 4841 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457850 4841 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457854 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457859 4841 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457864 4841 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457868 4841 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457873 4841 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457877 4841 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457882 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457886 4841 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457891 4841 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457898 4841 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457902 4841 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457906 4841 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457910 4841 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457915 4841 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457919 4841 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457923 4841 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457928 4841 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457933 4841 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457937 4841 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457942 4841 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457946 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457950 4841 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457955 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457959 4841 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457974 4841 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457979 4841 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457983 4841 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457988 4841 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457992 4841 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.457998 4841 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458006 4841 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458011 4841 feature_gate.go:330] unrecognized feature gate: Example
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458017 4841 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458021 4841 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458028 4841 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458033 4841 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458038 4841 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458044 4841 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458049 4841 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458054 4841 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458059 4841 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458066 4841 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458072 4841 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458076 4841 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458081 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458087 4841 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458093 4841 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458098 4841 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458103 4841 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458108 4841 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458113 4841 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458125 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458130 4841 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458136 4841 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458141 4841 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458147 4841 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.458152 4841 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.458169 4841 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.469278 4841 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.469333 4841 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469491 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469506 4841 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469515 4841 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469526 4841 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469536 4841 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469545 4841 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469554 4841 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469563 4841 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469572 4841 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469581 4841 feature_gate.go:330] unrecognized feature gate: Example
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469590 4841 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469598 4841 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469606 4841 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469615 4841 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469624 4841 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469632 4841 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469640 4841 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469648 4841 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469660 4841 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469673 4841 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469684 4841 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469695 4841 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469706 4841 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469717 4841 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469726 4841 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469737 4841 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469748 4841 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469790 4841 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469800 4841 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469809 4841 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469818 4841 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469827 4841 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469837 4841 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469845 4841 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469856 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469864 4841 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469872 4841 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 04 09:18:53 crc 
kubenswrapper[4841]: W1204 09:18:53.469881 4841 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469888 4841 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469897 4841 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469905 4841 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469914 4841 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469922 4841 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469930 4841 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469938 4841 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469947 4841 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469955 4841 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469963 4841 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469971 4841 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469979 4841 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469988 4841 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.469996 4841 feature_gate.go:330] unrecognized 
feature gate: VolumeGroupSnapshot Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470004 4841 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470012 4841 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470020 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470028 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470036 4841 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470044 4841 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470052 4841 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470061 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470069 4841 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470078 4841 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470086 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470095 4841 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470103 4841 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470112 4841 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 04 
09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470120 4841 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470128 4841 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470137 4841 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470146 4841 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470156 4841 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.470171 4841 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470416 4841 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470427 4841 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470436 4841 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470446 4841 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470455 4841 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470464 
4841 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470472 4841 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470481 4841 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470489 4841 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470497 4841 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470505 4841 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470513 4841 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470520 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470528 4841 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470536 4841 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470544 4841 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470552 4841 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470559 4841 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470567 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470575 4841 feature_gate.go:330] unrecognized feature gate: 
VolumeGroupSnapshot Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470582 4841 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470590 4841 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470598 4841 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470608 4841 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470616 4841 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470624 4841 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470632 4841 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470640 4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470648 4841 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470656 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470665 4841 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470673 4841 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470683 4841 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470693 4841 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470703 4841 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470711 4841 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470720 4841 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470728 4841 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470737 4841 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470745 4841 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470754 4841 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470789 4841 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470798 4841 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470806 4841 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470815 4841 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470823 4841 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470831 4841 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470839 
4841 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470847 4841 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470855 4841 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470863 4841 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470873 4841 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470882 4841 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470890 4841 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470900 4841 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470912 4841 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470920 4841 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470929 4841 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470939 4841 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470948 4841 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470956 4841 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470966 4841 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470974 4841 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470983 4841 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470991 4841 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.470999 4841 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.471029 4841 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.471038 4841 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.471047 4841 feature_gate.go:330] unrecognized feature gate: Example Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.471056 4841 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.471068 4841 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.471085 4841 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.471431 4841 server.go:940] "Client rotation is on, will bootstrap in background" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.476645 4841 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.476817 4841 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.477630 4841 server.go:997] "Starting client certificate rotation" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.477658 4841 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.477982 4841 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-07 22:43:46.763429923 +0000 UTC Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.478183 4841 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 85h24m53.285250739s for next certificate rotation Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.490165 4841 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.494052 4841 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.503887 4841 log.go:25] "Validated CRI v1 runtime API" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.521712 4841 log.go:25] "Validated CRI v1 image API" Dec 04 09:18:53 
crc kubenswrapper[4841]: I1204 09:18:53.523395 4841 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.527875 4841 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-04-09-13-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.527914 4841 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.546479 4841 manager.go:217] Machine: {Timestamp:2025-12-04 09:18:53.544344566 +0000 UTC m=+0.296134800 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a73eb056-92c5-4c06-b0de-ae9beb3011d0 BootID:0ccf9c40-084c-4a44-9660-424570094b73 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 
Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:bc:49:30 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:bc:49:30 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4a:76:32 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cb:ee:1f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:dd:1e:37 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c0:bd:cb Speed:-1 Mtu:1496} {Name:eth10 MacAddress:52:ac:3f:8b:83:04 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:aa:ce:dd:4d:bd:94 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction 
Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: 
DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.546806 4841 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.547035 4841 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.547788 4841 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.547988 4841 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.548029 4841 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.548290 4841 topology_manager.go:138] "Creating topology manager with none policy" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.548301 4841 container_manager_linux.go:303] "Creating device plugin manager" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.548494 4841 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.548530 4841 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.548752 4841 state_mem.go:36] "Initialized new in-memory state store" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.548861 4841 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.549462 4841 kubelet.go:418] "Attempting to sync node with API server" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.549487 4841 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.549529 4841 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.549547 4841 kubelet.go:324] "Adding apiserver pod source" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.549561 4841 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.551557 4841 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.552010 4841 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.552017 4841 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.552048 4841 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.552114 4841 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.552122 4841 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.553459 4841 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554185 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554226 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 
09:18:53.554241 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554255 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554279 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554292 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554313 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554335 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554350 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554363 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554401 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554415 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.554996 4841 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.555572 4841 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.555731 4841 server.go:1280] "Started kubelet" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.556089 4841 
server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.556601 4841 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.557847 4841 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 04 09:18:53 crc systemd[1]: Started Kubernetes Kubelet. Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.557811 4841 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.148:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187df891ad0231de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 09:18:53.555651038 +0000 UTC m=+0.307441322,LastTimestamp:2025-12-04 09:18:53.555651038 +0000 UTC m=+0.307441322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.558985 4841 server.go:460] "Adding debug handlers to kubelet server" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.559670 4841 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.559740 4841 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.560048 4841 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:35:06.782072893 +0000 UTC 
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.560089 4841 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 973h16m13.22198699s for next certificate rotation Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.560161 4841 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.560242 4841 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.560258 4841 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.561449 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="200ms" Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.562737 4841 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.562906 4841 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.563323 4841 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.567949 4841 factory.go:55] Registering systemd factory Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.567988 4841 
factory.go:221] Registration of the systemd container factory successfully Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.568427 4841 factory.go:153] Registering CRI-O factory Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.568458 4841 factory.go:221] Registration of the crio container factory successfully Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.568533 4841 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.568570 4841 factory.go:103] Registering Raw factory Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.568594 4841 manager.go:1196] Started watching for new ooms in manager Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.569292 4841 manager.go:319] Starting recovery of all containers Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571199 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571262 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571283 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571303 4841 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571322 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571342 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571363 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571383 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571405 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571433 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571452 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571478 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571496 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571519 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571570 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571588 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571606 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571625 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571645 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571665 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571682 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571709 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571726 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571743 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571804 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571892 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571919 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571938 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" 
seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571962 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.571982 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572002 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572028 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572046 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572066 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 
09:18:53.572083 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572102 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572119 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572137 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572156 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572173 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572191 4841 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572243 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572261 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572282 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572298 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572317 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572336 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572355 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572414 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572437 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572455 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572474 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572500 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572523 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572543 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572562 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572581 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572599 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572618 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" 
seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572635 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572656 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572674 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572696 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572715 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572733 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572753 
4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572823 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572843 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572861 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572879 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572896 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572914 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572932 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572949 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572966 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.572982 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573002 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573022 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573044 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573063 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573082 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573100 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573118 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573136 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573152 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573169 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573190 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573210 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573227 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573245 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" 
seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573262 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573279 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573299 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573315 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573339 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573357 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 
09:18:53.573379 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573396 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573414 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573431 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573511 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573529 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573547 4841 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573566 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573590 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573622 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573659 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573682 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573706 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573732 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573753 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573809 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573837 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573864 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573883 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573902 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573928 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573946 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573965 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573983 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.573999 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574016 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574032 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574050 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574068 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574085 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574112 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574131 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574154 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574172 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574234 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574255 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574273 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574289 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574308 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574326 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574345 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574372 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574400 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574419 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574437 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574454 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574485 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574505 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574524 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" 
seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574541 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574560 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574578 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574596 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574616 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574634 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 
09:18:53.574653 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574671 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574689 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574706 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574723 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574742 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574783 4841 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574805 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574823 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574843 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574863 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574882 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574901 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574948 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574967 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.574984 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.575004 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.575023 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.575041 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.575060 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.575080 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578516 4841 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578574 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578596 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578612 4841 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578628 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578643 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578658 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578673 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578689 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578706 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578722 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578743 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578778 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578796 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578813 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578882 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" 
seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578901 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578954 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578978 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.578996 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579011 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579028 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579044 4841 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579060 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579079 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579096 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579114 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579131 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579148 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579165 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579183 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579199 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579216 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579235 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579254 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579274 4841 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579288 4841 reconstruct.go:97] "Volume reconstruction finished" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.579296 4841 reconciler.go:26] "Reconciler: start to sync state" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.586505 4841 manager.go:324] Recovery completed Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.595117 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.597439 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.597484 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.597496 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.598507 4841 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.598518 4841 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.598542 4841 state_mem.go:36] "Initialized new in-memory state store" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.610209 4841 policy_none.go:49] "None policy: Start" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 
09:18:53.611815 4841 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.611858 4841 state_mem.go:35] "Initializing new in-memory state store" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.611899 4841 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.615367 4841 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.615441 4841 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.615486 4841 kubelet.go:2335] "Starting kubelet main sync loop" Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.615551 4841 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 04 09:18:53 crc kubenswrapper[4841]: W1204 09:18:53.620125 4841 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.620219 4841 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.660373 4841 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.687968 4841 manager.go:334] "Starting Device Plugin manager" 
Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.688026 4841 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.688039 4841 server.go:79] "Starting device plugin registration server" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.688652 4841 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.688671 4841 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.688933 4841 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.689038 4841 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.689052 4841 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.696191 4841 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.716253 4841 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.716344 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.717929 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc 
kubenswrapper[4841]: I1204 09:18:53.718013 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.718029 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.718363 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.718541 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.718581 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.719464 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.719486 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.719496 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.719961 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.720029 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.720050 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.720412 4841 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.720453 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.720491 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.721369 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.721402 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.721417 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.721893 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.721924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.721934 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.722079 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.722261 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.722329 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.722649 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.722676 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.722688 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.722848 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.722945 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.722977 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.723344 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.723372 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.723383 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.723488 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.723507 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.723518 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.723644 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.723669 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.724490 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.724514 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.724525 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.724634 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.724687 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.724701 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.763425 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="400ms" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.781738 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782038 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782157 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782329 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782498 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782570 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:18:53 
crc kubenswrapper[4841]: I1204 09:18:53.782613 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782642 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782666 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782799 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782876 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782929 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782957 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.782983 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.783029 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.788961 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.790678 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.790736 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc 
kubenswrapper[4841]: I1204 09:18:53.790750 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.790793 4841 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.791341 4841 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.148:6443: connect: connection refused" node="crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884399 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884533 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884620 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884571 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884688 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884625 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884724 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884744 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884791 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884854 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884877 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884872 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884962 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.884974 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885008 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885053 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885119 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885125 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885181 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885233 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885267 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885303 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885308 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885369 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885371 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885412 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885436 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885433 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885522 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.885574 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.991470 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.992952 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.993008 4841 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.993028 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:53 crc kubenswrapper[4841]: I1204 09:18:53.993069 4841 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 09:18:53 crc kubenswrapper[4841]: E1204 09:18:53.993689 4841 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.148:6443: connect: connection refused" node="crc" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.042894 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.065421 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 04 09:18:54 crc kubenswrapper[4841]: W1204 09:18:54.073084 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-30a35247ea6dab294e3e62fe927f3ab473b46ad4f2a1bb2d7f7a3610caf49485 WatchSource:0}: Error finding container 30a35247ea6dab294e3e62fe927f3ab473b46ad4f2a1bb2d7f7a3610caf49485: Status 404 returned error can't find the container with id 30a35247ea6dab294e3e62fe927f3ab473b46ad4f2a1bb2d7f7a3610caf49485 Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.077720 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:54 crc kubenswrapper[4841]: W1204 09:18:54.098576 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-16d25563a150853f51ce4bb0e767df81a3012427ff68c2e722eabd1d418a42c5 WatchSource:0}: Error finding container 16d25563a150853f51ce4bb0e767df81a3012427ff68c2e722eabd1d418a42c5: Status 404 returned error can't find the container with id 16d25563a150853f51ce4bb0e767df81a3012427ff68c2e722eabd1d418a42c5 Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.115808 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.127358 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 09:18:54 crc kubenswrapper[4841]: W1204 09:18:54.152314 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e996c0df4fa086e5c1c47be172168ed235132d88628dececbfa572b3c7660a4a WatchSource:0}: Error finding container e996c0df4fa086e5c1c47be172168ed235132d88628dececbfa572b3c7660a4a: Status 404 returned error can't find the container with id e996c0df4fa086e5c1c47be172168ed235132d88628dececbfa572b3c7660a4a Dec 04 09:18:54 crc kubenswrapper[4841]: E1204 09:18:54.165164 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="800ms" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.394699 4841 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.396146 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.396180 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.396189 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.396213 4841 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 09:18:54 crc kubenswrapper[4841]: E1204 09:18:54.396626 4841 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.148:6443: connect: connection refused" node="crc" Dec 04 09:18:54 crc kubenswrapper[4841]: W1204 09:18:54.502315 4841 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Dec 04 09:18:54 crc kubenswrapper[4841]: E1204 09:18:54.502741 4841 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.557181 4841 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection 
refused Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.624430 4841 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3736715dee60ba1531f2cab026a5571b68ac4bfb6b3df9150a5db9d6a74a75cb" exitCode=0 Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.624526 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3736715dee60ba1531f2cab026a5571b68ac4bfb6b3df9150a5db9d6a74a75cb"} Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.624629 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ddf337b7bbbedc304ce9530939c754ea164734c71bc32e0bce304444e4aa1984"} Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.624753 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.625838 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.625873 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.625885 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.626088 4841 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8d6400eebac6690e98a84418a3280c68e544841ffa308b8c72af442636f4f8f2" exitCode=0 Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.626195 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8d6400eebac6690e98a84418a3280c68e544841ffa308b8c72af442636f4f8f2"} Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.626263 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"30a35247ea6dab294e3e62fe927f3ab473b46ad4f2a1bb2d7f7a3610caf49485"} Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.626381 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.627954 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.627987 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.627999 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.628589 4841 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915" exitCode=0 Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.628644 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915"} Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.628669 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e996c0df4fa086e5c1c47be172168ed235132d88628dececbfa572b3c7660a4a"} Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.628779 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.629531 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.629557 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.629570 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.631137 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9"} Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.631186 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1d919c56af9015bd02fdb653a538ee66f1ac2bcdea80e2913823841165a81ff0"} Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.636444 4841 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b" exitCode=0 Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.636505 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b"} Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.636550 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"16d25563a150853f51ce4bb0e767df81a3012427ff68c2e722eabd1d418a42c5"} Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.636675 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.638482 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.638529 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.638545 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.640643 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.641545 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.641609 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:54 crc kubenswrapper[4841]: I1204 09:18:54.641625 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:54 crc kubenswrapper[4841]: W1204 09:18:54.937432 4841 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Dec 04 09:18:54 crc kubenswrapper[4841]: E1204 09:18:54.937570 4841 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Dec 04 09:18:54 crc kubenswrapper[4841]: E1204 09:18:54.965820 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="1.6s" Dec 04 09:18:55 crc kubenswrapper[4841]: W1204 09:18:55.019339 4841 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Dec 04 09:18:55 crc kubenswrapper[4841]: E1204 09:18:55.019422 4841 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Dec 04 09:18:55 crc kubenswrapper[4841]: W1204 09:18:55.071469 4841 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Dec 04 09:18:55 crc 
kubenswrapper[4841]: E1204 09:18:55.071581 4841 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.197116 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.198295 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.198337 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.198349 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.198381 4841 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 09:18:55 crc kubenswrapper[4841]: E1204 09:18:55.198869 4841 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.148:6443: connect: connection refused" node="crc" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.556513 4841 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.640282 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821"} Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.640332 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d"} Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.640352 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef"} Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.640443 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.641284 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.641307 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.641382 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.644288 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac"} Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.644339 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941"} Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.646225 4841 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c6da78bb4c92b7833c055765a69b54fe78646c2015d90a41af8847db9c543f6e" exitCode=0 Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.646276 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c6da78bb4c92b7833c055765a69b54fe78646c2015d90a41af8847db9c543f6e"} Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.646372 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.646959 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.646982 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.646990 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.649291 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3430f5680887adb2b8ef1f20abd7728e432dd05a37ab32fc9528b3a428edfba9"} Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.649358 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.649979 4841 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.650004 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.650016 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.652588 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8906c735ba3c5099a156d618ad6b8b55919b1efc952d2a5a42a64dcea6e0b69a"} Dec 04 09:18:55 crc kubenswrapper[4841]: I1204 09:18:55.652609 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"208762b3effeb4ef79a3c7ac64874044c6c99cb18b898f4be1c57262e4f7aee9"} Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.328701 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.658115 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87"} Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.658164 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d"} Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.658177 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2"} Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.658235 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.659247 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.659281 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.659292 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.664084 4841 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="eb89d5d03ed688768f75e02b647e5413c04653b082fe04f0d401904128160c45" exitCode=0 Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.664210 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"eb89d5d03ed688768f75e02b647e5413c04653b082fe04f0d401904128160c45"} Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.664446 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.665587 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.665616 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.665628 
4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.666781 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1d010c39b4a6d72a19c06ec6f287e42dd355ec1980ea7579676aef4e3b1ff99a"} Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.666832 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.666921 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.667901 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.667951 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.667975 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.668178 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.668205 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.668216 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.798959 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:56 crc 
kubenswrapper[4841]: I1204 09:18:56.800030 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.800065 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.800074 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:56 crc kubenswrapper[4841]: I1204 09:18:56.800101 4841 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.021198 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.675067 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.675142 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.675865 4841 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.675893 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676017 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0af5e88d9c783a3a1cff7609a60a89529f9ff496d5685b75ea452ba54d0b17ed"} Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676080 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cb43ce607b054c6de3b6de0214bbf627f7be00fd58a4d9d9461f6037118508d7"} Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676122 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676151 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4172f431fff8435ca60757a303f9808bfa0276aa8eefcc349a4fbe6f9ef55484"} Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676820 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676852 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676852 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676877 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676878 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676893 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676901 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676906 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.676886 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:57 crc kubenswrapper[4841]: I1204 09:18:57.907237 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:18:58 crc kubenswrapper[4841]: I1204 09:18:58.683751 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:58 crc kubenswrapper[4841]: I1204 09:18:58.683856 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:58 crc kubenswrapper[4841]: I1204 09:18:58.683736 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8f9032b9ced7e2319cb32ed67cc3513b4436d19aa13a36613a3516923bb7b4e"} Dec 04 09:18:58 crc kubenswrapper[4841]: I1204 09:18:58.683949 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"89120d7e58b50e435c402c5818bfdce313125a5b0c23b29106957a2af0b6dcbd"} Dec 04 09:18:58 crc kubenswrapper[4841]: I1204 09:18:58.691259 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:58 crc kubenswrapper[4841]: I1204 09:18:58.691322 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:58 crc kubenswrapper[4841]: I1204 09:18:58.691337 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:58 crc kubenswrapper[4841]: I1204 09:18:58.691259 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 
09:18:58 crc kubenswrapper[4841]: I1204 09:18:58.691421 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:58 crc kubenswrapper[4841]: I1204 09:18:58.691444 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.190142 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.190319 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.191750 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.191801 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.191812 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.687197 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.687198 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.689966 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.690149 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.690277 4841 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.691794 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.691850 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:18:59 crc kubenswrapper[4841]: I1204 09:18:59.691867 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:01 crc kubenswrapper[4841]: I1204 09:19:01.815363 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 09:19:01 crc kubenswrapper[4841]: I1204 09:19:01.815610 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:19:01 crc kubenswrapper[4841]: I1204 09:19:01.817299 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:01 crc kubenswrapper[4841]: I1204 09:19:01.817346 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:01 crc kubenswrapper[4841]: I1204 09:19:01.817355 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:02 crc kubenswrapper[4841]: I1204 09:19:02.153975 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:19:02 crc kubenswrapper[4841]: I1204 09:19:02.154259 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:19:02 crc kubenswrapper[4841]: I1204 09:19:02.155857 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Dec 04 09:19:02 crc kubenswrapper[4841]: I1204 09:19:02.155925 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:02 crc kubenswrapper[4841]: I1204 09:19:02.155944 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:02 crc kubenswrapper[4841]: I1204 09:19:02.539580 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 04 09:19:02 crc kubenswrapper[4841]: I1204 09:19:02.540199 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:19:02 crc kubenswrapper[4841]: I1204 09:19:02.541581 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:02 crc kubenswrapper[4841]: I1204 09:19:02.541620 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:02 crc kubenswrapper[4841]: I1204 09:19:02.541633 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:03 crc kubenswrapper[4841]: E1204 09:19:03.700426 4841 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.462277 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.462480 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.464364 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.464413 4841 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.464430 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.472562 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.703793 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.705520 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.705573 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.705588 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:04 crc kubenswrapper[4841]: I1204 09:19:04.713076 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:19:05 crc kubenswrapper[4841]: I1204 09:19:05.154157 4841 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 04 09:19:05 crc kubenswrapper[4841]: I1204 09:19:05.154273 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 09:19:05 crc kubenswrapper[4841]: I1204 09:19:05.706351 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:19:05 crc kubenswrapper[4841]: I1204 09:19:05.707502 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:05 crc kubenswrapper[4841]: I1204 09:19:05.707545 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:05 crc kubenswrapper[4841]: I1204 09:19:05.707558 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:06 crc kubenswrapper[4841]: I1204 09:19:06.151345 4841 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 04 09:19:06 crc kubenswrapper[4841]: I1204 09:19:06.151417 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 04 09:19:06 crc kubenswrapper[4841]: I1204 09:19:06.157311 4841 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Dec 04 09:19:06 crc kubenswrapper[4841]: I1204 09:19:06.157410 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 04 09:19:07 crc kubenswrapper[4841]: I1204 09:19:07.029076 4841 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]log ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]etcd ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/generic-apiserver-start-informers ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/priority-and-fairness-filter ok Dec 04 09:19:07 crc kubenswrapper[4841]: 
[+]poststarthook/storage-object-count-tracker-hook ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/start-apiextensions-informers ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/start-apiextensions-controllers ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/crd-informer-synced ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/start-system-namespaces-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 04 09:19:07 crc kubenswrapper[4841]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 04 09:19:07 crc kubenswrapper[4841]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/bootstrap-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/start-kube-aggregator-informers ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/apiservice-registration-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 04 09:19:07 crc kubenswrapper[4841]: 
[+]poststarthook/apiservice-discovery-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]autoregister-completion ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/apiservice-openapi-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 04 09:19:07 crc kubenswrapper[4841]: livez check failed Dec 04 09:19:07 crc kubenswrapper[4841]: I1204 09:19:07.029174 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 09:19:07 crc kubenswrapper[4841]: I1204 09:19:07.905949 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 04 09:19:07 crc kubenswrapper[4841]: I1204 09:19:07.906237 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:19:07 crc kubenswrapper[4841]: I1204 09:19:07.907884 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:07 crc kubenswrapper[4841]: I1204 09:19:07.907954 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:07 crc kubenswrapper[4841]: I1204 09:19:07.907969 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:07 crc kubenswrapper[4841]: I1204 09:19:07.941608 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 04 09:19:08 crc kubenswrapper[4841]: I1204 09:19:08.713853 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:19:08 crc kubenswrapper[4841]: I1204 
09:19:08.715641 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:08 crc kubenswrapper[4841]: I1204 09:19:08.715709 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:08 crc kubenswrapper[4841]: I1204 09:19:08.715731 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:08 crc kubenswrapper[4841]: I1204 09:19:08.733024 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 04 09:19:09 crc kubenswrapper[4841]: I1204 09:19:09.195890 4841 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 04 09:19:09 crc kubenswrapper[4841]: I1204 09:19:09.195950 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 04 09:19:09 crc kubenswrapper[4841]: I1204 09:19:09.717163 4841 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 09:19:09 crc kubenswrapper[4841]: I1204 09:19:09.718396 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:09 crc kubenswrapper[4841]: I1204 09:19:09.718426 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:09 crc kubenswrapper[4841]: I1204 09:19:09.718434 4841 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.156203 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.157876 4841 trace.go:236] Trace[1507201015]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 09:18:56.623) (total time: 14533ms): Dec 04 09:19:11 crc kubenswrapper[4841]: Trace[1507201015]: ---"Objects listed" error: 14533ms (09:19:11.157) Dec 04 09:19:11 crc kubenswrapper[4841]: Trace[1507201015]: [14.533873757s] [14.533873757s] END Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.157905 4841 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.158729 4841 trace.go:236] Trace[382350993]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 09:18:56.603) (total time: 14555ms): Dec 04 09:19:11 crc kubenswrapper[4841]: Trace[382350993]: ---"Objects listed" error: 14555ms (09:19:11.158) Dec 04 09:19:11 crc kubenswrapper[4841]: Trace[382350993]: [14.555504406s] [14.555504406s] END Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.158778 4841 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.164360 4841 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.165352 4841 trace.go:236] Trace[704918015]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 09:18:56.954) (total time: 14210ms): Dec 04 09:19:11 crc kubenswrapper[4841]: Trace[704918015]: ---"Objects listed" 
error: 14210ms (09:19:11.165) Dec 04 09:19:11 crc kubenswrapper[4841]: Trace[704918015]: [14.21080492s] [14.21080492s] END Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.165384 4841 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.169080 4841 trace.go:236] Trace[270852685]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 09:18:57.167) (total time: 14001ms): Dec 04 09:19:11 crc kubenswrapper[4841]: Trace[270852685]: ---"Objects listed" error: 14001ms (09:19:11.168) Dec 04 09:19:11 crc kubenswrapper[4841]: Trace[270852685]: [14.001929037s] [14.001929037s] END Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.169124 4841 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.180302 4841 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.180556 4841 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.181555 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.181675 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.181743 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.181835 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.181890 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.203487 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.206880 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.206915 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.206924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.206941 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.206951 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.215414 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.218460 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.218493 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.218502 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.218516 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.218533 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.235926 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.238871 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.238904 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.238912 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.238925 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.238934 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.246811 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.250907 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.250939 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.250948 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.250962 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.250972 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.261490 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.261608 4841 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.263235 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.263270 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.263279 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.263293 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.263316 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.365379 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.365448 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.365463 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.365481 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.365501 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.468032 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.468095 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.468106 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.468121 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.468130 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.559554 4841 apiserver.go:52] "Watching apiserver" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.561557 4841 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.561994 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-d5tkl","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.562353 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.562382 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.562408 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.562571 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.562652 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.563116 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.563189 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.563286 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.563509 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d5tkl" Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.563556 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.564635 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.564962 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.565544 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.565662 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.566418 4841 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.566710 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.566835 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.566888 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.566925 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.566839 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 
09:19:11.566854 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.566880 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.568741 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.570342 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.570364 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.570371 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.570386 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.570395 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.582976 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.599980 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.609538 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.619749 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.628573 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.634459 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.644630 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.657139 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666370 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666428 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666453 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666481 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666505 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666527 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666548 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 09:19:11 crc 
kubenswrapper[4841]: I1204 09:19:11.666579 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666605 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666631 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666653 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666678 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666698 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666717 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666740 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666782 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666805 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666830 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 04 09:19:11 crc 
kubenswrapper[4841]: I1204 09:19:11.666851 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666871 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666890 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666911 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666934 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666955 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666969 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.666979 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667114 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667142 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667168 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 
09:19:11.667189 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667257 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667282 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667308 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667371 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667513 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod 
"1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667595 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667670 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667878 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.667971 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.668083 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.668168 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.668222 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.668432 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.668579 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.668736 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.668828 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.668822 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.668889 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.669239 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.669462 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.669531 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.669658 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.669785 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.669864 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.670118 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.670162 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.670370 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.670437 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.670609 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.670897 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.670977 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.671153 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.671310 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.671463 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.671510 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.671735 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.671741 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.671817 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672146 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672248 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672322 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672498 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672709 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672799 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672833 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 
09:19:11.672869 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672873 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672902 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672939 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.672970 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673001 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673037 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673075 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673109 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673144 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673176 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673207 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673240 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673270 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673318 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673377 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673409 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673440 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673469 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673504 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673534 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673566 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 
09:19:11.673607 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673639 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673669 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673700 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673732 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673796 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673858 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673899 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673929 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673961 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674004 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674036 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674114 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674152 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674182 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674712 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674830 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674873 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674914 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674961 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675007 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675063 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675111 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" 
(UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675155 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675196 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675234 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675275 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675307 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 04 09:19:11 crc 
kubenswrapper[4841]: I1204 09:19:11.675338 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675370 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675413 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675447 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675484 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675518 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675551 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675607 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675658 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675702 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675734 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 
04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675797 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675831 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675868 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675904 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673007 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673029 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673144 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673182 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673302 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673567 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673579 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673891 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.673964 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.674239 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675004 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675593 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.675945 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676143 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676108 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676200 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676226 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676235 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676273 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676306 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676367 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676390 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676409 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676428 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676440 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676458 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676524 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676577 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676616 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676653 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676689 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676809 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676873 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676936 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676945 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: 
"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.676992 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677031 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677066 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677094 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677102 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677152 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677243 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677273 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677298 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677290 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677330 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677374 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677401 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677424 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677446 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677468 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677480 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677488 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677526 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677550 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677577 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677601 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677624 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677649 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677673 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677697 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677723 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677730 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677780 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677742 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677808 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677862 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677889 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677916 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677946 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677971 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.677997 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678001 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678026 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678056 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678108 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678136 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678165 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678191 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678229 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678254 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 09:19:11 crc 
kubenswrapper[4841]: I1204 09:19:11.678280 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678307 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678334 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678362 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678389 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678453 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678482 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678512 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678542 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678592 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678617 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 04 09:19:11 crc 
kubenswrapper[4841]: I1204 09:19:11.678643 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678669 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678702 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678728 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678755 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678797 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678825 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678857 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678881 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678907 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678930 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678955 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.678992 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679017 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679045 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679078 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679104 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679128 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679166 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679200 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679228 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2e52051e-dda2-46c1-8026-af8c26dff263-hosts-file\") pod \"node-resolver-d5tkl\" (UID: \"2e52051e-dda2-46c1-8026-af8c26dff263\") " pod="openshift-dns/node-resolver-d5tkl" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679254 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679282 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679310 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679332 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679359 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 
09:19:11.679385 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ll6\" (UniqueName: \"kubernetes.io/projected/2e52051e-dda2-46c1-8026-af8c26dff263-kube-api-access-47ll6\") pod \"node-resolver-d5tkl\" (UID: \"2e52051e-dda2-46c1-8026-af8c26dff263\") " pod="openshift-dns/node-resolver-d5tkl" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679412 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679442 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679466 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679491 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679515 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679542 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679567 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679619 4841 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679634 4841 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679648 4841 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679663 4841 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679677 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679691 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679706 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679722 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679736 4841 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679752 4841 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679787 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679816 4841 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679829 4841 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679843 4841 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679856 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679870 4841 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679885 4841 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 
crc kubenswrapper[4841]: I1204 09:19:11.679898 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679912 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679926 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679947 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679961 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679974 4841 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679987 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.679999 4841 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680010 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680023 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680037 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680049 4841 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680062 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680075 4841 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680088 4841 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680100 4841 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680113 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680126 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680140 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680130 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680156 4841 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680213 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680265 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680301 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680333 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680361 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680392 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node 
\"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680420 4841 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680449 4841 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680479 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680507 4841 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680535 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.680691 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.681217 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.681638 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.681812 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.681877 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.682096 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.682132 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.682737 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.683517 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.684615 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.684946 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.688197 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:19:12.188167475 +0000 UTC m=+18.939957769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.693971 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.694235 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.694315 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.694916 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.695025 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.695039 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.695077 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.695076 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.696025 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.696298 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.696317 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.696484 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.697031 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.697075 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.697593 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.697752 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.697858 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.698220 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.698281 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.698876 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.701400 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.701705 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.701823 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.702126 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.702829 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.702935 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.703065 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.703125 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.703170 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.703906 4841 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.703234 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.703449 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.703601 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.703788 4841 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.704281 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.704654 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.704727 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:12.204692918 +0000 UTC m=+18.956483332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.703806 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.703862 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.703927 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.704821 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.704094 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.705039 4841 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.705376 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:12.205347714 +0000 UTC m=+18.957138058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.705577 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.705177 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.705184 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.705655 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.705083 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.705802 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.705913 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.706276 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.705952 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.706024 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.706127 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.706585 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.706583 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.706668 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.706839 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.706946 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.707264 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.709575 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.709595 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.709837 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.710745 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.710865 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.711158 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.711652 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.711677 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.711803 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.712940 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.713752 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.714029 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.714642 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.714785 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.714991 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.715022 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.715051 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.715067 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.715085 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.715140 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.715474 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.715614 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.715682 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.715712 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.715936 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.716193 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.716242 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.716649 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.716986 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.717518 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.719068 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.719319 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.719632 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.720176 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.722371 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.722404 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.722424 4841 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.722497 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:12.222474082 +0000 UTC m=+18.974264276 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.723586 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.724354 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.728146 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.729824 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.731712 4841 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87" exitCode=255
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.731846 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87"}
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.731997 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.732030 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.732107 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.732387 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.732956 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.733867 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.735457 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.736276 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.737495 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.737515 4841 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.737552 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: E1204 09:19:11.737582 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:12.237561781 +0000 UTC m=+18.989351985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.737199 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.738964 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.738972 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.739347 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.740840 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.741349 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.742175 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.743868 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.743939 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.744451 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.744799 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.744851 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.744887 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.744970 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.745296 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.745419 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.746573 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.747093 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.748553 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.748582 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.749161 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.750268 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.750450 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.751113 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.751133 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.751359 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.752331 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.752401 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.752877 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.753162 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.753540 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.753608 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.753677 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.753707 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.753731 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.753992 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.754434 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.755466 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.762000 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.762551 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.771955 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.777252 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.778948 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.778989 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.779003 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.779020 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.779032 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.779270 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.780556 4841 scope.go:117] "RemoveContainer" containerID="7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.780610 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.780973 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2e52051e-dda2-46c1-8026-af8c26dff263-hosts-file\") pod \"node-resolver-d5tkl\" (UID: \"2e52051e-dda2-46c1-8026-af8c26dff263\") " pod="openshift-dns/node-resolver-d5tkl"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781029 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781076 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781125 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47ll6\" (UniqueName: \"kubernetes.io/projected/2e52051e-dda2-46c1-8026-af8c26dff263-kube-api-access-47ll6\") pod \"node-resolver-d5tkl\" (UID: \"2e52051e-dda2-46c1-8026-af8c26dff263\") " pod="openshift-dns/node-resolver-d5tkl"
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781201 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781229 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781260 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781277 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781290 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781304 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781316 4841 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781329 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781341 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName:
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781353 4841 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781364 4841 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781376 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781389 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781401 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781414 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781427 4841 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781439 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781452 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781464 4841 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781475 4841 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781489 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781501 4841 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781503 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781512 4841 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781567 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781578 4841 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781591 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781613 4841 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781708 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781729 4841 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781748 
4841 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781778 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781788 4841 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781797 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781807 4841 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781816 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781826 4841 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781834 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781846 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781857 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781869 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781879 4841 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781889 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781888 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2e52051e-dda2-46c1-8026-af8c26dff263-hosts-file\") pod \"node-resolver-d5tkl\" (UID: \"2e52051e-dda2-46c1-8026-af8c26dff263\") " pod="openshift-dns/node-resolver-d5tkl" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781916 4841 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781904 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781951 4841 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781960 4841 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781981 4841 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781852 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.781993 4841 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782005 4841 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782037 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782047 4841 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782057 4841 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782066 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782077 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782089 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782101 4841 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782112 4841 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782123 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782138 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782150 4841 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782161 4841 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782174 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782194 4841 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") 
on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782208 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782232 4841 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782242 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782254 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782265 4841 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782274 4841 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782287 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782300 
4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782312 4841 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782328 4841 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782338 4841 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782347 4841 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782357 4841 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782367 4841 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782379 4841 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782396 4841 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782409 4841 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782422 4841 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782431 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782442 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782452 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782464 4841 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782476 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782488 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782500 4841 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782514 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782528 4841 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782539 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782549 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath 
\"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782560 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782573 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782585 4841 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782597 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782609 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782621 4841 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782634 4841 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782645 4841 
reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782918 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782941 4841 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782952 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782962 4841 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782971 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782981 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.782991 4841 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783003 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783012 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783021 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783031 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783040 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783049 4841 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783059 4841 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783068 4841 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783077 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783086 4841 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783094 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783102 4841 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783111 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783121 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" 
DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783130 4841 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.783139 4841 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784146 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784158 4841 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784167 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784176 4841 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784186 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784195 4841 
reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784204 4841 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784213 4841 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784223 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784254 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784263 4841 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784272 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784281 4841 reconciler_common.go:293] "Volume detached for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784306 4841 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784315 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784323 4841 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784369 4841 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784378 4841 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784387 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784396 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784404 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784413 4841 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784421 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.784429 4841 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.789871 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.797869 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47ll6\" (UniqueName: \"kubernetes.io/projected/2e52051e-dda2-46c1-8026-af8c26dff263-kube-api-access-47ll6\") pod 
\"node-resolver-d5tkl\" (UID: \"2e52051e-dda2-46c1-8026-af8c26dff263\") " pod="openshift-dns/node-resolver-d5tkl" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.800351 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.809825 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.878143 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.882415 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.882440 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.882450 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.882463 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.882473 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.885870 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.894863 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 09:19:11 crc kubenswrapper[4841]: W1204 09:19:11.894971 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-87d9f6751a0a16ffba4c746bcca3d998c60d01d8e0d59f4a7009a71631832d6e WatchSource:0}: Error finding container 87d9f6751a0a16ffba4c746bcca3d998c60d01d8e0d59f4a7009a71631832d6e: Status 404 returned error can't find the container with id 87d9f6751a0a16ffba4c746bcca3d998c60d01d8e0d59f4a7009a71631832d6e Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.900647 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d5tkl" Dec 04 09:19:11 crc kubenswrapper[4841]: W1204 09:19:11.900663 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e1e88311944e06fdae6c4c9dbeb701f9f132335d7174ab0848ffb129a7b25b07 WatchSource:0}: Error finding container e1e88311944e06fdae6c4c9dbeb701f9f132335d7174ab0848ffb129a7b25b07: Status 404 returned error can't find the container with id e1e88311944e06fdae6c4c9dbeb701f9f132335d7174ab0848ffb129a7b25b07 Dec 04 09:19:11 crc kubenswrapper[4841]: W1204 09:19:11.917463 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e52051e_dda2_46c1_8026_af8c26dff263.slice/crio-a0f2addefd2d620f005862ff71c5ae01955d15067b6b31bfc8c58a7c726b5da6 WatchSource:0}: Error finding container a0f2addefd2d620f005862ff71c5ae01955d15067b6b31bfc8c58a7c726b5da6: Status 404 returned error can't find the container with id a0f2addefd2d620f005862ff71c5ae01955d15067b6b31bfc8c58a7c726b5da6 Dec 04 09:19:11 crc kubenswrapper[4841]: W1204 09:19:11.918675 4841 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7ab5f03824debcc6f2246ac9dd42cb9f3cc07f0b16dd7aee6ff498485fe2328e WatchSource:0}: Error finding container 7ab5f03824debcc6f2246ac9dd42cb9f3cc07f0b16dd7aee6ff498485fe2328e: Status 404 returned error can't find the container with id 7ab5f03824debcc6f2246ac9dd42cb9f3cc07f0b16dd7aee6ff498485fe2328e Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.985996 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.986555 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.986629 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.986655 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:11 crc kubenswrapper[4841]: I1204 09:19:11.986671 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:11Z","lastTransitionTime":"2025-12-04T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.034892 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.049667 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.069058 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.082265 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.097024 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.097048 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.097055 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.097071 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.097097 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:12Z","lastTransitionTime":"2025-12-04T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.098680 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.114141 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.126262 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.135573 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.148013 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.158292 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.166026 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.175007 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.188394 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.188491 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:19:13.188471745 +0000 UTC m=+19.940261949 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.188410 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.199339 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.199374 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.199384 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.199397 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.199407 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:12Z","lastTransitionTime":"2025-12-04T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.199999 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.209332 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.216251 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.222669 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.238360 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.247109 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.258215 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.271192 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.280964 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.289242 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 
09:19:12.289455 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.289394 4841 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.289522 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.289646 4841 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.289706 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:13.289561674 +0000 UTC m=+20.041351878 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.289779 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:13.289723868 +0000 UTC m=+20.041514072 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.289829 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.289920 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.289944 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 
09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.289956 4841 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.289986 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:13.289976654 +0000 UTC m=+20.041766858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.289920 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.290007 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.290016 4841 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.290042 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:13.290033405 +0000 UTC m=+20.041823609 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.296646 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.300920 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.300955 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.300967 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.301001 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.301014 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:12Z","lastTransitionTime":"2025-12-04T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.311482 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.339578 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.355903 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.366346 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.377502 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.387840 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.403630 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.403856 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.403946 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.404035 4841 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.404114 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:12Z","lastTransitionTime":"2025-12-04T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.506794 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.507059 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.507132 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.507214 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.507273 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:12Z","lastTransitionTime":"2025-12-04T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.610208 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.610432 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.610785 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.610868 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.610924 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:12Z","lastTransitionTime":"2025-12-04T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.616445 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.616576 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.713736 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.713789 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.713798 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.713812 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.713822 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:12Z","lastTransitionTime":"2025-12-04T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.739791 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d5tkl" event={"ID":"2e52051e-dda2-46c1-8026-af8c26dff263","Type":"ContainerStarted","Data":"1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.739837 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d5tkl" event={"ID":"2e52051e-dda2-46c1-8026-af8c26dff263","Type":"ContainerStarted","Data":"a0f2addefd2d620f005862ff71c5ae01955d15067b6b31bfc8c58a7c726b5da6"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.741334 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7ab5f03824debcc6f2246ac9dd42cb9f3cc07f0b16dd7aee6ff498485fe2328e"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.743311 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.743374 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.743389 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e1e88311944e06fdae6c4c9dbeb701f9f132335d7174ab0848ffb129a7b25b07"} Dec 04 
09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.744719 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.744789 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"87d9f6751a0a16ffba4c746bcca3d998c60d01d8e0d59f4a7009a71631832d6e"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.748660 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.751279 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.751521 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.756776 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:19:12 crc kubenswrapper[4841]: E1204 09:19:12.773341 4841 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.777155 4841 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.792234 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rxw4w"] Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.792596 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.801441 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.801751 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.801916 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.802305 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hhkwl"] Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.806077 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.806189 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.807029 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-76xdk"] Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.807546 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2lx6q"] Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.807977 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.807988 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.810571 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.811116 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.811427 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.811582 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.811683 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.811820 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.811915 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.811934 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.812019 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.812856 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.813038 4841 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.813262 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.815875 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.815897 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.815905 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.815916 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.815925 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:12Z","lastTransitionTime":"2025-12-04T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.824535 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.824803 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.824913 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.825081 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.838235 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.851685 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906106 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86bfe6c3-d06e-40b1-9801-74abeb07ae15-cni-binary-copy\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906140 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-run-k8s-cni-cncf-io\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906160 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-hostroot\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906178 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-kubelet\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906197 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-os-release\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906216 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-ovn\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906238 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906259 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-os-release\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906278 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-slash\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906353 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-etc-openvswitch\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906470 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-bin\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906536 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-var-lib-cni-multus\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906570 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-etc-kubernetes\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906596 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5bdd240e-976c-408f-9ace-3cd860da98e4-rootfs\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906622 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-env-overrides\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906641 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-netd\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906662 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-config\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906704 4841 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbqn8\" (UniqueName: \"kubernetes.io/projected/86bfe6c3-d06e-40b1-9801-74abeb07ae15-kube-api-access-rbqn8\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906723 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-systemd-units\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906743 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-conf-dir\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906802 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-script-lib\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906823 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:12 crc kubenswrapper[4841]: 
I1204 09:19:12.906848 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-cnibin\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906868 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-log-socket\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906884 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78nlx\" (UniqueName: \"kubernetes.io/projected/c56a9daa-a941-4d89-abd0-b7f0472ee869-kube-api-access-78nlx\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906934 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbhzs\" (UniqueName: \"kubernetes.io/projected/bb1a2623-885c-4232-bdda-ce68122022f5-kube-api-access-kbhzs\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906960 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-var-lib-kubelet\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " 
pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906977 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-run-multus-certs\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.906992 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bb1a2623-885c-4232-bdda-ce68122022f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907011 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-cnibin\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907025 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-daemon-config\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907043 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-var-lib-cni-bin\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " 
pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907125 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-node-log\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907220 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-ovn-kubernetes\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907257 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-netns\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907298 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bdd240e-976c-408f-9ace-3cd860da98e4-mcd-auth-proxy-config\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907314 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bjcb\" (UniqueName: \"kubernetes.io/projected/5bdd240e-976c-408f-9ace-3cd860da98e4-kube-api-access-5bjcb\") pod 
\"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907329 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-run-netns\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907343 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-systemd\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907358 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovn-node-metrics-cert\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907375 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-openvswitch\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907390 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/5bdd240e-976c-408f-9ace-3cd860da98e4-proxy-tls\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907406 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-system-cni-dir\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907424 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-var-lib-openvswitch\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907460 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-cni-dir\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907505 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bb1a2623-885c-4232-bdda-ce68122022f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907545 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-system-cni-dir\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.907565 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-socket-dir-parent\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.918504 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.919850 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.919879 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.919890 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.919906 4841 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.919915 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:12Z","lastTransitionTime":"2025-12-04T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.934748 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.951962 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.967068 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.980474 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:12 crc kubenswrapper[4841]: I1204 09:19:12.991398 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.002691 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008646 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-var-lib-cni-bin\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008680 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-node-log\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008700 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-ovn-kubernetes\") pod \"ovnkube-node-hhkwl\" (UID: 
\"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008727 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bdd240e-976c-408f-9ace-3cd860da98e4-mcd-auth-proxy-config\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008742 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bjcb\" (UniqueName: \"kubernetes.io/projected/5bdd240e-976c-408f-9ace-3cd860da98e4-kube-api-access-5bjcb\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008771 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-netns\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008788 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-run-netns\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008804 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-var-lib-cni-bin\") pod \"multus-76xdk\" (UID: 
\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008819 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-node-log\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008811 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-ovn-kubernetes\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008803 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-systemd\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008855 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-netns\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008876 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-systemd\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc 
kubenswrapper[4841]: I1204 09:19:13.008892 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovn-node-metrics-cert\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008898 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-run-netns\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008934 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-openvswitch\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008958 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-openvswitch\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008981 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-cni-dir\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.008998 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bdd240e-976c-408f-9ace-3cd860da98e4-proxy-tls\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009026 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-system-cni-dir\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009049 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-var-lib-openvswitch\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009064 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-system-cni-dir\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009078 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-socket-dir-parent\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009095 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/bb1a2623-885c-4232-bdda-ce68122022f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009111 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-kubelet\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009140 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-os-release\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009153 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86bfe6c3-d06e-40b1-9801-74abeb07ae15-cni-binary-copy\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009168 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-run-k8s-cni-cncf-io\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009182 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-hostroot\") pod \"multus-76xdk\" (UID: 
\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009196 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-ovn\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009211 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009236 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-bin\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009239 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-cni-dir\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009261 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-var-lib-cni-multus\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " 
pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009283 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-kubelet\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009285 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-etc-kubernetes\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009307 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-etc-kubernetes\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009311 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5bdd240e-976c-408f-9ace-3cd860da98e4-rootfs\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009329 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-os-release\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009343 4841 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-slash\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009352 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-os-release\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009359 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-etc-openvswitch\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009374 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-env-overrides\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009400 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbqn8\" (UniqueName: \"kubernetes.io/projected/86bfe6c3-d06e-40b1-9801-74abeb07ae15-kube-api-access-rbqn8\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009415 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-systemd-units\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009428 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-netd\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009444 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-config\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009471 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-conf-dir\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009522 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-script-lib\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009542 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009559 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-cnibin\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009575 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-log-socket\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009590 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78nlx\" (UniqueName: \"kubernetes.io/projected/c56a9daa-a941-4d89-abd0-b7f0472ee869-kube-api-access-78nlx\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009607 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbhzs\" (UniqueName: \"kubernetes.io/projected/bb1a2623-885c-4232-bdda-ce68122022f5-kube-api-access-kbhzs\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009623 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-var-lib-kubelet\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009638 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-run-multus-certs\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009653 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bb1a2623-885c-4232-bdda-ce68122022f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009885 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-cnibin\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.009904 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-daemon-config\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010037 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-ovn\") pod \"ovnkube-node-hhkwl\" (UID: 
\"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010077 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-run-k8s-cni-cncf-io\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010106 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-var-lib-cni-multus\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010120 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-hostroot\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010132 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-systemd-units\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010120 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 
09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010143 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-bin\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010195 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-netd\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010167 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5bdd240e-976c-408f-9ace-3cd860da98e4-rootfs\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010204 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-os-release\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010222 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-slash\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010175 4841 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-log-socket\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010257 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-etc-openvswitch\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010312 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-conf-dir\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010342 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-var-lib-openvswitch\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010412 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-system-cni-dir\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010442 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-var-lib-kubelet\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010460 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-system-cni-dir\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010510 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-socket-dir-parent\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010558 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-cnibin\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010744 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-host-run-multus-certs\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.010751 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86bfe6c3-d06e-40b1-9801-74abeb07ae15-cnibin\") pod \"multus-76xdk\" (UID: 
\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.015575 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf8
2bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.022837 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.022873 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.022882 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.022896 4841 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.022906 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:13Z","lastTransitionTime":"2025-12-04T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.023978 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bb1a2623-885c-4232-bdda-ce68122022f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.039448 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovn-node-metrics-cert\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.039933 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bb1a2623-885c-4232-bdda-ce68122022f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.039951 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/5bdd240e-976c-408f-9ace-3cd860da98e4-proxy-tls\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.040729 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-config\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.040868 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bjcb\" (UniqueName: \"kubernetes.io/projected/5bdd240e-976c-408f-9ace-3cd860da98e4-kube-api-access-5bjcb\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.040909 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86bfe6c3-d06e-40b1-9801-74abeb07ae15-cni-binary-copy\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.040943 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bdd240e-976c-408f-9ace-3cd860da98e4-mcd-auth-proxy-config\") pod \"machine-config-daemon-rxw4w\" (UID: \"5bdd240e-976c-408f-9ace-3cd860da98e4\") " pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.041060 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.042854 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bb1a2623-885c-4232-bdda-ce68122022f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.043239 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/86bfe6c3-d06e-40b1-9801-74abeb07ae15-multus-daemon-config\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.045828 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbhzs\" (UniqueName: \"kubernetes.io/projected/bb1a2623-885c-4232-bdda-ce68122022f5-kube-api-access-kbhzs\") pod \"multus-additional-cni-plugins-2lx6q\" (UID: \"bb1a2623-885c-4232-bdda-ce68122022f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.046359 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-script-lib\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.046951 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rbqn8\" (UniqueName: \"kubernetes.io/projected/86bfe6c3-d06e-40b1-9801-74abeb07ae15-kube-api-access-rbqn8\") pod \"multus-76xdk\" (UID: \"86bfe6c3-d06e-40b1-9801-74abeb07ae15\") " pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.048698 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-env-overrides\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.052380 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78nlx\" (UniqueName: \"kubernetes.io/projected/c56a9daa-a941-4d89-abd0-b7f0472ee869-kube-api-access-78nlx\") pod \"ovnkube-node-hhkwl\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.061161 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.076460 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.094652 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.106870 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.119125 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.124175 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.124426 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-76xdk" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.126338 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.126371 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.126382 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.126397 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.126407 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:13Z","lastTransitionTime":"2025-12-04T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.129815 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" Dec 04 09:19:13 crc kubenswrapper[4841]: W1204 09:19:13.131662 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc56a9daa_a941_4d89_abd0_b7f0472ee869.slice/crio-e2667ad58bda1502145ffa75b12e794bdefd881b89fc35f87e4ee3db9f0bf6f8 WatchSource:0}: Error finding container e2667ad58bda1502145ffa75b12e794bdefd881b89fc35f87e4ee3db9f0bf6f8: Status 404 returned error can't find the container with id e2667ad58bda1502145ffa75b12e794bdefd881b89fc35f87e4ee3db9f0bf6f8 Dec 04 09:19:13 crc kubenswrapper[4841]: W1204 09:19:13.144044 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86bfe6c3_d06e_40b1_9801_74abeb07ae15.slice/crio-ea1bfa92d224c692a412c1b161661b92ae75730826edda35d39961987808e499 WatchSource:0}: Error finding container ea1bfa92d224c692a412c1b161661b92ae75730826edda35d39961987808e499: Status 404 returned error can't find the container with id ea1bfa92d224c692a412c1b161661b92ae75730826edda35d39961987808e499 Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.145116 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 
09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.163367 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.187062 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.204072 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.212035 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.212170 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:19:15.21214951 +0000 UTC m=+21.963939714 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.222549 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.228692 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.228727 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.228737 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.228752 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.228796 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:13Z","lastTransitionTime":"2025-12-04T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.313683 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.313773 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.313804 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.313845 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314039 4841 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314081 4841 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314113 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:15.31409797 +0000 UTC m=+22.065888164 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314149 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:15.314120591 +0000 UTC m=+22.065910795 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314172 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314202 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314215 4841 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314265 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:15.314250284 +0000 UTC m=+22.066040488 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314186 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314289 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314303 4841 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.314352 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:15.314342246 +0000 UTC m=+22.066132520 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.330835 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.330873 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.330884 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.330900 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.330912 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:13Z","lastTransitionTime":"2025-12-04T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.433070 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.433436 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.433449 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.433466 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.433478 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:13Z","lastTransitionTime":"2025-12-04T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.536043 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.536102 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.536111 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.536126 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.536134 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:13Z","lastTransitionTime":"2025-12-04T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.616538 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.616586 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.616721 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:13 crc kubenswrapper[4841]: E1204 09:19:13.616874 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.621279 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.622172 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.623661 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.624554 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.625985 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.626670 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.627540 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.628917 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.629885 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.631162 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.631842 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.633481 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.634326 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.635067 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.636496 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.638511 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.638849 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.638892 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.638905 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.638922 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.638935 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:13Z","lastTransitionTime":"2025-12-04T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.640771 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.641118 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.641749 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.642540 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.645080 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.645800 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.647898 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.648514 4841 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.650112 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.650849 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.651681 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.653392 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.654071 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.654357 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.655579 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.656432 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.657582 4841 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.657678 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 04 
09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.659215 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.660117 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.660505 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.661965 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.662632 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.663552 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.664208 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.665324 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 04 
09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.665754 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.666665 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.667322 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.668265 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.668779 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.669700 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.670261 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.671324 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 04 
09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.671822 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.672656 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.672657 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.673159 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.674019 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.674545 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.675076 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.699084 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.713182 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.731424 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.740611 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.740641 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.740654 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.740672 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.740682 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:13Z","lastTransitionTime":"2025-12-04T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.751285 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.755438 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-76xdk" event={"ID":"86bfe6c3-d06e-40b1-9801-74abeb07ae15","Type":"ContainerStarted","Data":"6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.755505 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-76xdk" event={"ID":"86bfe6c3-d06e-40b1-9801-74abeb07ae15","Type":"ContainerStarted","Data":"ea1bfa92d224c692a412c1b161661b92ae75730826edda35d39961987808e499"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.756575 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a" exitCode=0 Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.756631 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.756655 4841 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"e2667ad58bda1502145ffa75b12e794bdefd881b89fc35f87e4ee3db9f0bf6f8"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.763982 4841 generic.go:334] "Generic (PLEG): container finished" podID="bb1a2623-885c-4232-bdda-ce68122022f5" containerID="060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65" exitCode=0 Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.764086 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" event={"ID":"bb1a2623-885c-4232-bdda-ce68122022f5","Type":"ContainerDied","Data":"060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.764119 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" event={"ID":"bb1a2623-885c-4232-bdda-ce68122022f5","Type":"ContainerStarted","Data":"7f0561eb6078d8f78d8fa0fb629bb5d049be61760c5856aade3985db1480bb38"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.776991 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerStarted","Data":"f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.777565 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerStarted","Data":"e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.777586 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerStarted","Data":"570f233c66364c4708a34f578db97ee2bb8fb5b32525e25d397dc658a5c1dc2c"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.777911 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.806096 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.844030 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.854962 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.855000 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.855009 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.855023 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.855033 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:13Z","lastTransitionTime":"2025-12-04T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.868068 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.892087 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.913392 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.924748 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.939272 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.957646 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.957699 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.957711 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.957727 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.957739 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:13Z","lastTransitionTime":"2025-12-04T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.967235 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.983591 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:13 crc kubenswrapper[4841]: I1204 09:19:13.998229 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.011477 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.027803 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.044068 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.057304 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 
09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.059720 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.059789 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.059801 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.059815 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.059825 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:14Z","lastTransitionTime":"2025-12-04T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.069850 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.080652 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a7
4eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.092138 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.102277 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.162029 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.162452 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.162465 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:14 crc 
kubenswrapper[4841]: I1204 09:19:14.162480 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.162493 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:14Z","lastTransitionTime":"2025-12-04T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.265202 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.265240 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.265251 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.265270 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.265283 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:14Z","lastTransitionTime":"2025-12-04T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.368521 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.368570 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.368586 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.368611 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.368628 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:14Z","lastTransitionTime":"2025-12-04T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.470691 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.470727 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.470738 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.470771 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.470784 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:14Z","lastTransitionTime":"2025-12-04T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.574021 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.574511 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.574524 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.574542 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.574555 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:14Z","lastTransitionTime":"2025-12-04T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.616392 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:14 crc kubenswrapper[4841]: E1204 09:19:14.616545 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.677609 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.677646 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.677655 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.677669 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.677680 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:14Z","lastTransitionTime":"2025-12-04T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.777378 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.778885 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.778916 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.778929 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.778954 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.778966 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:14Z","lastTransitionTime":"2025-12-04T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.782180 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.782246 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.782267 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.782286 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.782305 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.782322 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387"} Dec 04 09:19:14 crc kubenswrapper[4841]: 
I1204 09:19:14.785081 4841 generic.go:334] "Generic (PLEG): container finished" podID="bb1a2623-885c-4232-bdda-ce68122022f5" containerID="e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076" exitCode=0 Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.785139 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" event={"ID":"bb1a2623-885c-4232-bdda-ce68122022f5","Type":"ContainerDied","Data":"e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.794541 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.809824 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.830548 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.845267 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.861706 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.874732 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.881325 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:14 crc 
kubenswrapper[4841]: I1204 09:19:14.881371 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.881385 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.881398 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.881407 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:14Z","lastTransitionTime":"2025-12-04T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.886201 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.899736 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.913789 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.928152 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.942212 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a7
4eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.955259 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.970969 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.983168 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.983227 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.983247 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:14 crc 
kubenswrapper[4841]: I1204 09:19:14.983274 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.983293 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:14Z","lastTransitionTime":"2025-12-04T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:14 crc kubenswrapper[4841]: I1204 09:19:14.985959 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:14Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.006393 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.019092 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.035462 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.048101 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.065482 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.077885 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.085591 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:15 crc 
kubenswrapper[4841]: I1204 09:19:15.085621 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.085630 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.085642 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.085651 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:15Z","lastTransitionTime":"2025-12-04T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.094460 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.113724 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.130248 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.142641 4841 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.153039 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.166280 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.188603 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.188651 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.188665 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.188683 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.188695 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:15Z","lastTransitionTime":"2025-12-04T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.233491 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.233695 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-04 09:19:19.233673979 +0000 UTC m=+25.985464183 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.291704 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.291810 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.291835 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.291868 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.291890 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:15Z","lastTransitionTime":"2025-12-04T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.334692 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.334840 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.334900 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.334981 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.334914 4841 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335071 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335343 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335372 4841 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335134 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335465 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:19.335427243 +0000 UTC m=+26.087217487 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335144 4841 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335474 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335532 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:19.335509505 +0000 UTC m=+26.087299829 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335548 4841 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335600 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:19.335548457 +0000 UTC m=+26.087338711 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.335638 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:19.335622929 +0000 UTC m=+26.087413173 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.395041 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.395110 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.395133 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.395165 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.395188 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:15Z","lastTransitionTime":"2025-12-04T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.497717 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.497748 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.497777 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.497793 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.497803 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:15Z","lastTransitionTime":"2025-12-04T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.601154 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.601411 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.601537 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.601621 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.601695 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:15Z","lastTransitionTime":"2025-12-04T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.616708 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.616706 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.616824 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:15 crc kubenswrapper[4841]: E1204 09:19:15.616995 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.704117 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.704169 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.704181 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.704196 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.704207 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:15Z","lastTransitionTime":"2025-12-04T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.793120 4841 generic.go:334] "Generic (PLEG): container finished" podID="bb1a2623-885c-4232-bdda-ce68122022f5" containerID="c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98" exitCode=0 Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.793177 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" event={"ID":"bb1a2623-885c-4232-bdda-ce68122022f5","Type":"ContainerDied","Data":"c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98"} Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.807403 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.807429 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.807436 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.807449 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.807458 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:15Z","lastTransitionTime":"2025-12-04T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.812016 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50e
db34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.828820 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.844602 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.859680 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.875157 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.890960 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.903295 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.909447 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.909479 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.909490 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.909506 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.909526 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:15Z","lastTransitionTime":"2025-12-04T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.924212 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.937485 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.961061 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:15 crc kubenswrapper[4841]: I1204 09:19:15.984079 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:15Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.011488 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:16 crc 
kubenswrapper[4841]: I1204 09:19:16.011525 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.011532 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.011547 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.011556 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:16Z","lastTransitionTime":"2025-12-04T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.012525 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.027943 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.114451 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.114501 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.114512 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.114546 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.114556 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:16Z","lastTransitionTime":"2025-12-04T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.217248 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.217285 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.217295 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.217316 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.217327 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:16Z","lastTransitionTime":"2025-12-04T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.320058 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.320158 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.320181 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.320209 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.320229 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:16Z","lastTransitionTime":"2025-12-04T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.422835 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.423157 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.423166 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.423181 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.423194 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:16Z","lastTransitionTime":"2025-12-04T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.526649 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.526699 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.526714 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.526734 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.526750 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:16Z","lastTransitionTime":"2025-12-04T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.615989 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:16 crc kubenswrapper[4841]: E1204 09:19:16.616123 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.629025 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.629102 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.629125 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.629156 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.629184 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:16Z","lastTransitionTime":"2025-12-04T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.731093 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.731129 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.731138 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.731154 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.731164 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:16Z","lastTransitionTime":"2025-12-04T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.803913 4841 generic.go:334] "Generic (PLEG): container finished" podID="bb1a2623-885c-4232-bdda-ce68122022f5" containerID="226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88" exitCode=0 Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.803986 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" event={"ID":"bb1a2623-885c-4232-bdda-ce68122022f5","Type":"ContainerDied","Data":"226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.810817 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.823405 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.833846 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.833878 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.833891 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.833908 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.833920 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:16Z","lastTransitionTime":"2025-12-04T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.840600 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50e
db34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.854430 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.864891 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.876729 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.889256 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.900627 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.911149 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.927208 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.936557 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.936595 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.936604 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.936620 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.936632 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:16Z","lastTransitionTime":"2025-12-04T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.938712 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.951523 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.967426 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:16 crc kubenswrapper[4841]: I1204 09:19:16.982792 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:16Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.038829 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.038857 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.038865 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.038877 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.038885 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:17Z","lastTransitionTime":"2025-12-04T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.142154 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.142229 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.142252 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.142282 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.142304 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:17Z","lastTransitionTime":"2025-12-04T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.245360 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.245434 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.245458 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.245490 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.245513 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:17Z","lastTransitionTime":"2025-12-04T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.352546 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.352601 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.352618 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.352644 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.352662 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:17Z","lastTransitionTime":"2025-12-04T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.456119 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.456177 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.456194 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.456217 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.456235 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:17Z","lastTransitionTime":"2025-12-04T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.571205 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.571284 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.571309 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.571348 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.571377 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:17Z","lastTransitionTime":"2025-12-04T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.616724 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.616743 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:17 crc kubenswrapper[4841]: E1204 09:19:17.616964 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:17 crc kubenswrapper[4841]: E1204 09:19:17.617045 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.676179 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.676244 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.676264 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.676293 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.676310 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:17Z","lastTransitionTime":"2025-12-04T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.706067 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-fmcq4"] Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.706531 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.709721 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.709828 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.710184 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.710599 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.731731 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.760962 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.764798 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e112e2-9aab-40e0-bca5-ced078a00cc4-host\") pod \"node-ca-fmcq4\" (UID: \"f0e112e2-9aab-40e0-bca5-ced078a00cc4\") " pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.764905 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f0e112e2-9aab-40e0-bca5-ced078a00cc4-serviceca\") pod \"node-ca-fmcq4\" (UID: \"f0e112e2-9aab-40e0-bca5-ced078a00cc4\") " pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.764974 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfvfx\" (UniqueName: \"kubernetes.io/projected/f0e112e2-9aab-40e0-bca5-ced078a00cc4-kube-api-access-dfvfx\") pod 
\"node-ca-fmcq4\" (UID: \"f0e112e2-9aab-40e0-bca5-ced078a00cc4\") " pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.779572 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.779631 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.779652 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.779683 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.779703 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:17Z","lastTransitionTime":"2025-12-04T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.789113 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.829841 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.835938 4841 generic.go:334] "Generic (PLEG): container finished" podID="bb1a2623-885c-4232-bdda-ce68122022f5" containerID="e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e" exitCode=0 Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.836002 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" event={"ID":"bb1a2623-885c-4232-bdda-ce68122022f5","Type":"ContainerDied","Data":"e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e"} Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.846995 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.866631 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e112e2-9aab-40e0-bca5-ced078a00cc4-host\") pod \"node-ca-fmcq4\" (UID: \"f0e112e2-9aab-40e0-bca5-ced078a00cc4\") " pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.866728 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f0e112e2-9aab-40e0-bca5-ced078a00cc4-serviceca\") pod \"node-ca-fmcq4\" (UID: \"f0e112e2-9aab-40e0-bca5-ced078a00cc4\") " pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.866838 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfvfx\" (UniqueName: \"kubernetes.io/projected/f0e112e2-9aab-40e0-bca5-ced078a00cc4-kube-api-access-dfvfx\") pod \"node-ca-fmcq4\" (UID: \"f0e112e2-9aab-40e0-bca5-ced078a00cc4\") " pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.866901 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e112e2-9aab-40e0-bca5-ced078a00cc4-host\") pod \"node-ca-fmcq4\" (UID: \"f0e112e2-9aab-40e0-bca5-ced078a00cc4\") " pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.873022 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.876925 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f0e112e2-9aab-40e0-bca5-ced078a00cc4-serviceca\") pod \"node-ca-fmcq4\" (UID: \"f0e112e2-9aab-40e0-bca5-ced078a00cc4\") " pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.882894 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.883182 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.883463 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 
09:19:17.883685 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.883971 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:17Z","lastTransitionTime":"2025-12-04T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.892389 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.910092 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfvfx\" (UniqueName: \"kubernetes.io/projected/f0e112e2-9aab-40e0-bca5-ced078a00cc4-kube-api-access-dfvfx\") pod \"node-ca-fmcq4\" (UID: \"f0e112e2-9aab-40e0-bca5-ced078a00cc4\") " pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.913583 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.927005 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.945662 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 
09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.958990 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.973327 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.985623 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.988647 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.988692 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.988706 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.988724 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:17 crc kubenswrapper[4841]: I1204 09:19:17.988738 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:17Z","lastTransitionTime":"2025-12-04T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.001744 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:17Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.015233 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.030124 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.037728 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-fmcq4" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.044527 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964d
d3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.069478 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.089615 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.092427 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.092468 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.092480 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.092497 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.092510 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:18Z","lastTransitionTime":"2025-12-04T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.109639 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.127160 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.144150 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.154794 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.169403 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 
09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.186197 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.194470 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.194508 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.194516 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.194531 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.194541 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:18Z","lastTransitionTime":"2025-12-04T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.200917 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.212740 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.227611 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.301621 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.301662 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.301674 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.301693 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.301705 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:18Z","lastTransitionTime":"2025-12-04T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.404944 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.405006 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.405018 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.405036 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.405049 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:18Z","lastTransitionTime":"2025-12-04T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.508900 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.509003 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.509023 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.509083 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.509102 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:18Z","lastTransitionTime":"2025-12-04T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.613406 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.613450 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.613461 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.613479 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.613490 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:18Z","lastTransitionTime":"2025-12-04T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.616199 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:18 crc kubenswrapper[4841]: E1204 09:19:18.616335 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.716415 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.716466 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.716484 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.716506 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.716522 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:18Z","lastTransitionTime":"2025-12-04T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.819383 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.819414 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.819422 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.819434 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.819445 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:18Z","lastTransitionTime":"2025-12-04T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.844297 4841 generic.go:334] "Generic (PLEG): container finished" podID="bb1a2623-885c-4232-bdda-ce68122022f5" containerID="9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1" exitCode=0 Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.844359 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" event={"ID":"bb1a2623-885c-4232-bdda-ce68122022f5","Type":"ContainerDied","Data":"9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.846230 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fmcq4" event={"ID":"f0e112e2-9aab-40e0-bca5-ced078a00cc4","Type":"ContainerStarted","Data":"303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.846264 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fmcq4" event={"ID":"f0e112e2-9aab-40e0-bca5-ced078a00cc4","Type":"ContainerStarted","Data":"ef0ff97d3d41d5d77c99e63d1d09e32cb4959e421708189d5f9290d7b6ebabb0"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.863248 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.887679 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.913178 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.925091 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.925136 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.925154 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.925177 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.925195 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:18Z","lastTransitionTime":"2025-12-04T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.931927 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.947378 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.963295 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.982197 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:18 crc kubenswrapper[4841]: I1204 09:19:18.995233 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:18Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.008479 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.026352 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.027676 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.027958 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.027982 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.028026 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.028040 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:19Z","lastTransitionTime":"2025-12-04T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.038565 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.050581 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.065794 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.113059 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.124022 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.131007 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.131200 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.131316 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.131429 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.131571 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:19Z","lastTransitionTime":"2025-12-04T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.140030 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.159001 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.181181 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.200653 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.219598 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.231438 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.234210 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.234256 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.234267 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.234283 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.234295 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:19Z","lastTransitionTime":"2025-12-04T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.248394 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.261334 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.274059 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.281307 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.281457 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:19:27.281438081 +0000 UTC m=+34.033228285 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.290529 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.305856 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.316174 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.334197 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.338889 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.338916 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.338924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.338937 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.338946 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:19Z","lastTransitionTime":"2025-12-04T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.382142 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.382192 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.382213 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.382229 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382272 4841 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382298 4841 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382337 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382351 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:27.382328775 +0000 UTC m=+34.134118979 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382353 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382366 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:27.382360186 +0000 UTC m=+34.134150390 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382367 4841 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382386 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382396 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382405 4841 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382410 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:27.382398157 +0000 UTC m=+34.134188361 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.382431 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:27.382422318 +0000 UTC m=+34.134212522 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.441570 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.441615 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.441629 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.441646 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.441659 4841 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:19Z","lastTransitionTime":"2025-12-04T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.543632 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.543675 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.543687 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.543703 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.543715 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:19Z","lastTransitionTime":"2025-12-04T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.616019 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.616119 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.616232 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:19 crc kubenswrapper[4841]: E1204 09:19:19.616450 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.647385 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.647442 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.647457 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.647480 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.647496 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:19Z","lastTransitionTime":"2025-12-04T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.750933 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.750986 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.751005 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.751027 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.751044 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:19Z","lastTransitionTime":"2025-12-04T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.855641 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.855712 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.855734 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.855867 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.855897 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:19Z","lastTransitionTime":"2025-12-04T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.863128 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.863882 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.864062 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.871416 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" event={"ID":"bb1a2623-885c-4232-bdda-ce68122022f5","Type":"ContainerStarted","Data":"6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.884957 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.902079 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.903998 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.905962 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.922272 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.942149 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.960388 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.960436 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.960448 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.960465 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.960476 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:19Z","lastTransitionTime":"2025-12-04T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.963199 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.983685 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:19 crc kubenswrapper[4841]: I1204 09:19:19.998075 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:19Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.019138 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.032521 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.047586 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.063123 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.063175 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.063197 4841 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.063222 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.063238 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:20Z","lastTransitionTime":"2025-12-04T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.067004 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf
3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.083718 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9
e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.097337 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.111156 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.124280 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.141614 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-c
ni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.154537 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.170349 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.170403 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.170414 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.170441 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.170456 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:20Z","lastTransitionTime":"2025-12-04T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.173891 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.193651 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.213852 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.230574 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.251903 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.264065 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.272577 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.272609 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.272621 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.272637 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.272646 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:20Z","lastTransitionTime":"2025-12-04T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.278718 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.291649 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.311923 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.327605 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-0
4T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.341796 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:20Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.374962 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.375024 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.375044 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 
09:19:20.375067 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.375083 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:20Z","lastTransitionTime":"2025-12-04T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.476943 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.476999 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.477012 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.477028 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.477043 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:20Z","lastTransitionTime":"2025-12-04T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.579101 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.579172 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.579195 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.579229 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.579256 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:20Z","lastTransitionTime":"2025-12-04T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.616061 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:20 crc kubenswrapper[4841]: E1204 09:19:20.616234 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.682458 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.682507 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.682519 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.682537 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.682554 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:20Z","lastTransitionTime":"2025-12-04T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.784995 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.785030 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.785039 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.785052 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.785061 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:20Z","lastTransitionTime":"2025-12-04T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.875163 4841 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.887781 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.887821 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.887836 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.887857 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.887874 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:20Z","lastTransitionTime":"2025-12-04T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.991200 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.991842 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.992059 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.992269 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:20 crc kubenswrapper[4841]: I1204 09:19:20.992471 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:20Z","lastTransitionTime":"2025-12-04T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.095058 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.095092 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.095100 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.095113 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.095125 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.197602 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.197640 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.197650 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.197665 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.197674 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.300590 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.300625 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.300633 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.300646 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.300663 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.350554 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.350585 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.350593 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.350605 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.350614 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: E1204 09:19:21.360782 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:21Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.363644 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.363876 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.363975 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.364079 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.364165 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: E1204 09:19:21.378850 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:21Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.383378 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.383437 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.383455 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.383479 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.383496 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: E1204 09:19:21.399518 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:21Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.402920 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.402951 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.402962 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.402977 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.402986 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: E1204 09:19:21.417701 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:21Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.420623 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.420649 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.420658 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.420672 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.420680 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: E1204 09:19:21.438746 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:21Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:21 crc kubenswrapper[4841]: E1204 09:19:21.438881 4841 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.441146 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.441260 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.441347 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.441432 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.441508 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.545663 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.545705 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.545719 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.545737 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.545750 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.615999 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.616110 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:21 crc kubenswrapper[4841]: E1204 09:19:21.616175 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:21 crc kubenswrapper[4841]: E1204 09:19:21.616360 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.647831 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.647885 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.647902 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.647922 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.647934 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.750420 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.750481 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.750498 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.750521 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.750539 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.853549 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.853647 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.853666 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.853728 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.853831 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.878812 4841 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.956372 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.956707 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.956868 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.957023 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:21 crc kubenswrapper[4841]: I1204 09:19:21.957158 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:21Z","lastTransitionTime":"2025-12-04T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.060672 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.060736 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.060757 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.060843 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.060867 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:22Z","lastTransitionTime":"2025-12-04T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.164287 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.164358 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.164371 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.164394 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.164411 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:22Z","lastTransitionTime":"2025-12-04T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.267922 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.267976 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.267992 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.268015 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.268033 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:22Z","lastTransitionTime":"2025-12-04T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.371109 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.371150 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.371160 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.371175 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.371186 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:22Z","lastTransitionTime":"2025-12-04T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.474181 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.474239 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.474256 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.474278 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.474297 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:22Z","lastTransitionTime":"2025-12-04T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.577608 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.577686 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.577708 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.577757 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.577826 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:22Z","lastTransitionTime":"2025-12-04T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.615970 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:22 crc kubenswrapper[4841]: E1204 09:19:22.616155 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.680730 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.680828 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.680855 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.680885 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.680907 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:22Z","lastTransitionTime":"2025-12-04T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.784535 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.784628 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.784654 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.784685 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.784706 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:22Z","lastTransitionTime":"2025-12-04T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.885199 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/0.log" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.887381 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.887443 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.887465 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.887494 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.887517 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:22Z","lastTransitionTime":"2025-12-04T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.889847 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065" exitCode=1 Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.889943 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.891169 4841 scope.go:117] "RemoveContainer" containerID="ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.916149 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:355
12335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:22Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.935797 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:22Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.958192 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:22Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.982684 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:22Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.990184 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.990234 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.990247 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.990263 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.990281 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:22Z","lastTransitionTime":"2025-12-04T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:22 crc kubenswrapper[4841]: I1204 09:19:22.999380 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:22Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.022633 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.040154 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.056200 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.077814 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.092912 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.094040 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.094102 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.094118 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.094142 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 
09:19:23.094157 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:23Z","lastTransitionTime":"2025-12-04T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.113945 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.131686 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.144145 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.162971 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:22Z\\\",\\\"message\\\":\\\"opping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710666 6191 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710755 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:21.710780 6191 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:21.710825 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 09:19:21.710833 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 09:19:21.710846 6191 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710870 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:21.710885 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 09:19:21.710895 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:21.710850 6191 factory.go:656] Stopping watch factory\\\\nI1204 09:19:21.711380 6191 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.196566 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.196597 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.196605 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.196617 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.196626 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:23Z","lastTransitionTime":"2025-12-04T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.299387 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.299453 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.299471 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.299496 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.299518 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:23Z","lastTransitionTime":"2025-12-04T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.402859 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.402925 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.402942 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.402966 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.402983 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:23Z","lastTransitionTime":"2025-12-04T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.506729 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.506797 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.506808 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.506823 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.506833 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:23Z","lastTransitionTime":"2025-12-04T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.609516 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.609574 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.609591 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.609620 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.609639 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:23Z","lastTransitionTime":"2025-12-04T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.615942 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.615988 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:23 crc kubenswrapper[4841]: E1204 09:19:23.616102 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:23 crc kubenswrapper[4841]: E1204 09:19:23.616170 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.632548 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.647531 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a7
4eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.663743 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.677727 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.690965 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.712358 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.712413 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.712437 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.712469 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.712492 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:23Z","lastTransitionTime":"2025-12-04T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.714592 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.742354 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:22Z\\\",\\\"message\\\":\\\"opping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710666 6191 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710755 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:21.710780 6191 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:21.710825 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 09:19:21.710833 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 09:19:21.710846 6191 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710870 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:21.710885 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 09:19:21.710895 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:21.710850 6191 factory.go:656] Stopping watch factory\\\\nI1204 09:19:21.711380 6191 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.772635 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.785032 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.797350 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.811996 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.817825 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.817858 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.817867 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.817882 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.817892 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:23Z","lastTransitionTime":"2025-12-04T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.825247 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.836824 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.848692 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.895106 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/0.log" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.898231 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9"} Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.898335 4841 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.917407 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.920472 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.920497 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.920506 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.920521 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.920533 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:23Z","lastTransitionTime":"2025-12-04T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.931830 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.950434 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.975416 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:22Z\\\",\\\"message\\\":\\\"opping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710666 6191 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710755 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:21.710780 6191 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:21.710825 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 09:19:21.710833 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 09:19:21.710846 6191 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710870 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:21.710885 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 09:19:21.710895 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:21.710850 6191 factory.go:656] Stopping watch factory\\\\nI1204 09:19:21.711380 6191 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:23 crc kubenswrapper[4841]: I1204 09:19:23.987389 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.000502 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:23Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.014599 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.022412 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:24 crc 
kubenswrapper[4841]: I1204 09:19:24.022466 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.022481 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.022500 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.022513 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:24Z","lastTransitionTime":"2025-12-04T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.039351 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226af
c6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.051717 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.066574 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.121235 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.124499 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.124531 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.124542 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:24 crc 
kubenswrapper[4841]: I1204 09:19:24.124557 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.124566 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:24Z","lastTransitionTime":"2025-12-04T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.133955 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.143615 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a7
4eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.158987 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.226846 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.226890 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.226898 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.226912 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.226922 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:24Z","lastTransitionTime":"2025-12-04T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.329291 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.329349 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.329370 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.329399 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.329424 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:24Z","lastTransitionTime":"2025-12-04T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.432916 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.433013 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.433040 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.433103 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.433129 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:24Z","lastTransitionTime":"2025-12-04T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.523408 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.531845 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt"] Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.532428 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.534811 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.534831 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.543395 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.544568 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.544653 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.544671 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.544695 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.544714 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:24Z","lastTransitionTime":"2025-12-04T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.560678 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.582749 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.599450 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.616195 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:24 crc kubenswrapper[4841]: E1204 09:19:24.616370 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.622623 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.639596 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-
04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.646728 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.646803 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.646815 4841 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.646831 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.646866 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:24Z","lastTransitionTime":"2025-12-04T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.657395 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.660692 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6afad1dd-cafc-4c83-9e90-b02c61d10486-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.660737 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6afad1dd-cafc-4c83-9e90-b02c61d10486-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.660801 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6afad1dd-cafc-4c83-9e90-b02c61d10486-env-overrides\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.660834 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmhsl\" (UniqueName: \"kubernetes.io/projected/6afad1dd-cafc-4c83-9e90-b02c61d10486-kube-api-access-cmhsl\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.667914 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.681932 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.693830 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.708627 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.720085 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.733473 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.750309 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.750369 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.750387 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.750414 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.750432 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:24Z","lastTransitionTime":"2025-12-04T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.753361 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:22Z\\\",\\\"message\\\":\\\"opping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710666 6191 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710755 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:21.710780 6191 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:21.710825 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 09:19:21.710833 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 09:19:21.710846 6191 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710870 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:21.710885 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 09:19:21.710895 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:21.710850 6191 factory.go:656] Stopping watch factory\\\\nI1204 09:19:21.711380 6191 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.761603 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6afad1dd-cafc-4c83-9e90-b02c61d10486-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.761673 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6afad1dd-cafc-4c83-9e90-b02c61d10486-env-overrides\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.761710 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmhsl\" (UniqueName: \"kubernetes.io/projected/6afad1dd-cafc-4c83-9e90-b02c61d10486-kube-api-access-cmhsl\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.761785 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6afad1dd-cafc-4c83-9e90-b02c61d10486-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.762528 
4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6afad1dd-cafc-4c83-9e90-b02c61d10486-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.763382 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6afad1dd-cafc-4c83-9e90-b02c61d10486-env-overrides\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.764854 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4
a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.773433 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6afad1dd-cafc-4c83-9e90-b02c61d10486-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.783117 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-cmhsl\" (UniqueName: \"kubernetes.io/projected/6afad1dd-cafc-4c83-9e90-b02c61d10486-kube-api-access-cmhsl\") pod \"ovnkube-control-plane-749d76644c-56vgt\" (UID: \"6afad1dd-cafc-4c83-9e90-b02c61d10486\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.783174 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc 
kubenswrapper[4841]: I1204 09:19:24.796264 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.811595 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.826773 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.839999 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.848804 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.852290 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.852367 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.852393 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.852427 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.852449 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:24Z","lastTransitionTime":"2025-12-04T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.856621 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: W1204 09:19:24.865485 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6afad1dd_cafc_4c83_9e90_b02c61d10486.slice/crio-6c47733a1cb082c0edbbda7d2dc34c9fc96f873790f97bf09c167df316f246ce WatchSource:0}: Error finding container 6c47733a1cb082c0edbbda7d2dc34c9fc96f873790f97bf09c167df316f246ce: Status 404 returned error can't find the container with id 6c47733a1cb082c0edbbda7d2dc34c9fc96f873790f97bf09c167df316f246ce Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.882306 4841 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:22Z\\\",\\\"message\\\":\\\"opping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710666 6191 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710755 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:21.710780 6191 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:21.710825 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 09:19:21.710833 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 09:19:21.710846 6191 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710870 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:21.710885 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 09:19:21.710895 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:21.710850 6191 factory.go:656] Stopping watch factory\\\\nI1204 09:19:21.711380 6191 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.902103 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.904980 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" event={"ID":"6afad1dd-cafc-4c83-9e90-b02c61d10486","Type":"ContainerStarted","Data":"6c47733a1cb082c0edbbda7d2dc34c9fc96f873790f97bf09c167df316f246ce"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.906641 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/1.log" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.907187 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/0.log" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.909997 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9" exitCode=1 Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.910036 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.910091 4841 scope.go:117] 
"RemoveContainer" containerID="ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.910900 4841 scope.go:117] "RemoveContainer" containerID="07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9" Dec 04 09:19:24 crc kubenswrapper[4841]: E1204 09:19:24.911084 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.915177 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.933160 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.956895 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.957655 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.957699 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.957710 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.957727 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.957737 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:24Z","lastTransitionTime":"2025-12-04T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.967216 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.978474 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:24 crc kubenswrapper[4841]: I1204 09:19:24.991284 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161
b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.001288 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a7
4eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:24Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.010158 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.021860 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.031192 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.040346 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.052127 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.059643 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.059681 4841 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.059690 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.059706 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.059716 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:25Z","lastTransitionTime":"2025-12-04T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.070394 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:22Z\\\",\\\"message\\\":\\\"opping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710666 6191 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710755 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:21.710780 6191 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:21.710825 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 09:19:21.710833 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 09:19:21.710846 6191 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710870 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:21.710885 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 09:19:21.710895 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:21.710850 6191 factory.go:656] Stopping watch factory\\\\nI1204 09:19:21.711380 6191 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"ent handler 6 for removal\\\\nI1204 09:19:24.222460 6320 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1204 09:19:24.222469 6320 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:24.222484 6320 handler.go:208] Removed 
*v1.Namespace event handler 5\\\\nI1204 09:19:24.222492 6320 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 09:19:24.222491 6320 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:24.222512 6320 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1204 09:19:24.222626 6320 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.222660 6320 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.223139 6320 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 09:19:24.223164 6320 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:24.223184 6320 factory.go:656] Stopping watch factory\\\\nI1204 09:19:24.223195 6320 ovnkube.go:599] Stopped ovnkube\\\\nI1204 09:19:24.223214 6320 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.084995 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.097514 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.112892 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.132459 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.146556 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.159305 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.162433 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.162459 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.162533 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.162565 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.162577 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:25Z","lastTransitionTime":"2025-12-04T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.178634 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.199427 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161
b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.265633 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.265676 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.265685 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.265699 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.265710 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:25Z","lastTransitionTime":"2025-12-04T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.368912 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.368974 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.368992 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.369014 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.369033 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:25Z","lastTransitionTime":"2025-12-04T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.471065 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.471378 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.471390 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.471408 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.471423 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:25Z","lastTransitionTime":"2025-12-04T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.574584 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.574676 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.574700 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.574730 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.574753 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:25Z","lastTransitionTime":"2025-12-04T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.615844 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:25 crc kubenswrapper[4841]: E1204 09:19:25.616041 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.616237 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:25 crc kubenswrapper[4841]: E1204 09:19:25.616379 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.674237 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7t7hn"] Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.675112 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:25 crc kubenswrapper[4841]: E1204 09:19:25.675224 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.676867 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.676916 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.676933 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.676954 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.676972 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:25Z","lastTransitionTime":"2025-12-04T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.696339 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50e
db34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.718016 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.728566 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.737654 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.747264 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.757157 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.765917 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.772563 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crb2f\" (UniqueName: \"kubernetes.io/projected/e74f87eb-fb70-4679-93f8-ebe5de564484-kube-api-access-crb2f\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.772598 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.778915 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.778941 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.778952 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.778968 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.778979 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:25Z","lastTransitionTime":"2025-12-04T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.779112 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.799020 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:22Z\\\",\\\"message\\\":\\\"opping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710666 6191 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710755 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:21.710780 6191 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:21.710825 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 09:19:21.710833 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 09:19:21.710846 6191 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710870 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:21.710885 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 09:19:21.710895 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:21.710850 6191 factory.go:656] Stopping watch factory\\\\nI1204 09:19:21.711380 6191 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"ent handler 6 for removal\\\\nI1204 09:19:24.222460 6320 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1204 09:19:24.222469 6320 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:24.222484 6320 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:24.222492 6320 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 09:19:24.222491 6320 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:24.222512 6320 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1204 09:19:24.222626 6320 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.222660 6320 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.223139 6320 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 09:19:24.223164 6320 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:24.223184 6320 factory.go:656] Stopping watch factory\\\\nI1204 09:19:24.223195 6320 ovnkube.go:599] Stopped ovnkube\\\\nI1204 09:19:24.223214 6320 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cn
i/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.810121 4841 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc 
kubenswrapper[4841]: I1204 09:19:25.825365 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.840825 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.858870 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.873588 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crb2f\" (UniqueName: 
\"kubernetes.io/projected/e74f87eb-fb70-4679-93f8-ebe5de564484-kube-api-access-crb2f\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.873643 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:25 crc kubenswrapper[4841]: E1204 09:19:25.873787 4841 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:25 crc kubenswrapper[4841]: E1204 09:19:25.873842 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs podName:e74f87eb-fb70-4679-93f8-ebe5de564484 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:26.37382931 +0000 UTC m=+33.125619514 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs") pod "network-metrics-daemon-7t7hn" (UID: "e74f87eb-fb70-4679-93f8-ebe5de564484") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.874002 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.881612 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.881655 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.881667 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.881684 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.881696 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:25Z","lastTransitionTime":"2025-12-04T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.883621 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.895919 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 
09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.899227 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crb2f\" (UniqueName: \"kubernetes.io/projected/e74f87eb-fb70-4679-93f8-ebe5de564484-kube-api-access-crb2f\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.915233 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/1.log" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.920055 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" event={"ID":"6afad1dd-cafc-4c83-9e90-b02c61d10486","Type":"ContainerStarted","Data":"b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.920103 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" 
event={"ID":"6afad1dd-cafc-4c83-9e90-b02c61d10486","Type":"ContainerStarted","Data":"4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.932545 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageI
D\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.945258 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.955980 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.965605 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.980715 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.985713 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.985816 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.985949 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.985986 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.986009 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:25Z","lastTransitionTime":"2025-12-04T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:25 crc kubenswrapper[4841]: I1204 09:19:25.999307 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:25Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.010726 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:26Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc 
kubenswrapper[4841]: I1204 09:19:26.024017 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:26Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.037285 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:26Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.054555 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:26Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.078231 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9e2205367d629aef0435dfdb37b2e6d99e9201edd48d59cddfd1b2dab23065\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:22Z\\\",\\\"message\\\":\\\"opping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710666 6191 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710755 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:21.710780 6191 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:21.710825 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 09:19:21.710833 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 09:19:21.710846 6191 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:21.710870 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:21.710885 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 09:19:21.710895 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:21.710850 6191 factory.go:656] Stopping watch factory\\\\nI1204 09:19:21.711380 6191 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"ent handler 6 for removal\\\\nI1204 09:19:24.222460 6320 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1204 09:19:24.222469 6320 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:24.222484 6320 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:24.222492 6320 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 09:19:24.222491 6320 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:24.222512 6320 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1204 09:19:24.222626 6320 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.222660 6320 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.223139 6320 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 09:19:24.223164 6320 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:24.223184 6320 factory.go:656] Stopping watch factory\\\\nI1204 09:19:24.223195 6320 ovnkube.go:599] Stopped ovnkube\\\\nI1204 09:19:24.223214 6320 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cn
i/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:26Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.088459 4841 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.088506 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.088521 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.088544 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.088558 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:26Z","lastTransitionTime":"2025-12-04T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.093320 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:26Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.105867 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:26Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.117483 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:26Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.130450 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:26Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.142337 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:26Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.190366 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.190416 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.190428 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.190443 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.190458 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:26Z","lastTransitionTime":"2025-12-04T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.293284 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.293316 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.293324 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.293337 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.293346 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:26Z","lastTransitionTime":"2025-12-04T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.378596 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:26 crc kubenswrapper[4841]: E1204 09:19:26.378807 4841 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:26 crc kubenswrapper[4841]: E1204 09:19:26.378911 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs podName:e74f87eb-fb70-4679-93f8-ebe5de564484 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:27.378880866 +0000 UTC m=+34.130671100 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs") pod "network-metrics-daemon-7t7hn" (UID: "e74f87eb-fb70-4679-93f8-ebe5de564484") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.398113 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.398179 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.398196 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.398219 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.398238 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:26Z","lastTransitionTime":"2025-12-04T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.501024 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.501082 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.501102 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.501127 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.501145 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:26Z","lastTransitionTime":"2025-12-04T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.604460 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.604523 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.604541 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.604563 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.604581 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:26Z","lastTransitionTime":"2025-12-04T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.616098 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:26 crc kubenswrapper[4841]: E1204 09:19:26.616305 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.707916 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.707990 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.708014 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.708043 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.708063 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:26Z","lastTransitionTime":"2025-12-04T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.810850 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.810905 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.810916 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.810934 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.810946 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:26Z","lastTransitionTime":"2025-12-04T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.913613 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.913690 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.913714 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.913743 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:26 crc kubenswrapper[4841]: I1204 09:19:26.913806 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:26Z","lastTransitionTime":"2025-12-04T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.017240 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.017296 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.017312 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.017340 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.017356 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:27Z","lastTransitionTime":"2025-12-04T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.120626 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.120679 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.120697 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.120724 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.120741 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:27Z","lastTransitionTime":"2025-12-04T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.224307 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.224373 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.224390 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.224414 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.224436 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:27Z","lastTransitionTime":"2025-12-04T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.287843 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.288215 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-04 09:19:43.288167576 +0000 UTC m=+50.039957910 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.328202 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.328258 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.328276 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.328303 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.328326 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:27Z","lastTransitionTime":"2025-12-04T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.389052 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.389114 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.389145 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.389189 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.389212 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389281 4841 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389365 4841 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389427 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389445 4841 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389441 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:43.389398909 +0000 UTC m=+50.141189163 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389477 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389615 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:43.389585224 +0000 UTC m=+50.141375608 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389629 4841 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389636 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs podName:e74f87eb-fb70-4679-93f8-ebe5de564484 nodeName:}" failed. 
No retries permitted until 2025-12-04 09:19:29.389626325 +0000 UTC m=+36.141416529 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs") pod "network-metrics-daemon-7t7hn" (UID: "e74f87eb-fb70-4679-93f8-ebe5de564484") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389677 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389701 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389721 4841 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389719 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:43.389693356 +0000 UTC m=+50.141483760 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.389805 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:43.389795069 +0000 UTC m=+50.141585273 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.430985 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.431032 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.431048 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.431068 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.431089 4841 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:27Z","lastTransitionTime":"2025-12-04T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.534653 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.534747 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.534792 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.534815 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.534832 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:27Z","lastTransitionTime":"2025-12-04T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.615904 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.615958 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.616144 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.616211 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.616391 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:27 crc kubenswrapper[4841]: E1204 09:19:27.616590 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.638062 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.638112 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.638122 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.638145 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.638157 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:27Z","lastTransitionTime":"2025-12-04T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.740999 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.741097 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.741117 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.741178 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.741198 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:27Z","lastTransitionTime":"2025-12-04T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.844111 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.844192 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.844209 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.844266 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.844283 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:27Z","lastTransitionTime":"2025-12-04T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.947722 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.948105 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.948294 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.948533 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:27 crc kubenswrapper[4841]: I1204 09:19:27.948710 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:27Z","lastTransitionTime":"2025-12-04T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.052150 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.052445 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.052649 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.052866 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.053022 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:28Z","lastTransitionTime":"2025-12-04T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.156277 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.156366 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.156389 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.156412 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.156428 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:28Z","lastTransitionTime":"2025-12-04T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.258936 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.258992 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.259010 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.259069 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.259088 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:28Z","lastTransitionTime":"2025-12-04T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.361456 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.361493 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.361504 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.361519 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.361530 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:28Z","lastTransitionTime":"2025-12-04T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.464600 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.464642 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.464660 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.464683 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.464700 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:28Z","lastTransitionTime":"2025-12-04T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.568123 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.568185 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.568207 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.568232 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.568249 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:28Z","lastTransitionTime":"2025-12-04T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.616677 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:28 crc kubenswrapper[4841]: E1204 09:19:28.616835 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.671177 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.671287 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.671315 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.671345 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.671368 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:28Z","lastTransitionTime":"2025-12-04T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.774023 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.774081 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.774101 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.774128 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.774151 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:28Z","lastTransitionTime":"2025-12-04T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.876404 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.876453 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.876464 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.876483 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.876497 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:28Z","lastTransitionTime":"2025-12-04T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.978292 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.978331 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.978339 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.978352 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:28 crc kubenswrapper[4841]: I1204 09:19:28.978362 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:28Z","lastTransitionTime":"2025-12-04T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.081813 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.081874 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.081891 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.081920 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.081962 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:29Z","lastTransitionTime":"2025-12-04T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.184663 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.184733 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.184758 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.184816 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.184833 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:29Z","lastTransitionTime":"2025-12-04T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.288231 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.288305 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.288323 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.288347 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.288366 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:29Z","lastTransitionTime":"2025-12-04T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.391108 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.391164 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.391180 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.391205 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.391221 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:29Z","lastTransitionTime":"2025-12-04T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.415474 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:29 crc kubenswrapper[4841]: E1204 09:19:29.415646 4841 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:29 crc kubenswrapper[4841]: E1204 09:19:29.416076 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs podName:e74f87eb-fb70-4679-93f8-ebe5de564484 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:33.416048713 +0000 UTC m=+40.167838947 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs") pod "network-metrics-daemon-7t7hn" (UID: "e74f87eb-fb70-4679-93f8-ebe5de564484") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.493833 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.493896 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.493912 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.493931 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.493948 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:29Z","lastTransitionTime":"2025-12-04T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.528933 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.530207 4841 scope.go:117] "RemoveContainer" containerID="07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9" Dec 04 09:19:29 crc kubenswrapper[4841]: E1204 09:19:29.530738 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.550283 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.576246 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.596545 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.596825 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.596955 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.597085 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.597199 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:29Z","lastTransitionTime":"2025-12-04T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.597964 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.613005 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.616196 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:29 crc kubenswrapper[4841]: E1204 09:19:29.616312 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.616504 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:29 crc kubenswrapper[4841]: E1204 09:19:29.616703 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.616822 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:29 crc kubenswrapper[4841]: E1204 09:19:29.617016 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.628735 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.643602 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161
b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.657663 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a7
4eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.670204 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f
40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.684639 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.700416 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.700461 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.700475 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.700493 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.700508 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:29Z","lastTransitionTime":"2025-12-04T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.711117 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.728098 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.749370 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.779534 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"ent handler 6 for removal\\\\nI1204 09:19:24.222460 6320 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1204 09:19:24.222469 6320 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:24.222484 6320 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:24.222492 6320 handler.go:208] Removed *v1.EgressFirewall 
event handler 9\\\\nI1204 09:19:24.222491 6320 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:24.222512 6320 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1204 09:19:24.222626 6320 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.222660 6320 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.223139 6320 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 09:19:24.223164 6320 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:24.223184 6320 factory.go:656] Stopping watch factory\\\\nI1204 09:19:24.223195 6320 ovnkube.go:599] Stopped ovnkube\\\\nI1204 09:19:24.223214 6320 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68c
dec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.793191 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.803032 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.803079 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.803094 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.803113 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.803128 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:29Z","lastTransitionTime":"2025-12-04T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.809561 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.825905 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:29Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.906403 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.906460 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.906481 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.906512 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:29 crc kubenswrapper[4841]: I1204 09:19:29.906534 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:29Z","lastTransitionTime":"2025-12-04T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.008482 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.008738 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.008813 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.008873 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.008954 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:30Z","lastTransitionTime":"2025-12-04T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.111865 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.112123 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.112305 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.112447 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.112587 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:30Z","lastTransitionTime":"2025-12-04T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.216030 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.216309 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.216485 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.216642 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.216802 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:30Z","lastTransitionTime":"2025-12-04T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.319861 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.319908 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.319924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.319946 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.319964 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:30Z","lastTransitionTime":"2025-12-04T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.422831 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.422899 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.422919 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.422943 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.422961 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:30Z","lastTransitionTime":"2025-12-04T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.525128 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.525180 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.525198 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.525223 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.525239 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:30Z","lastTransitionTime":"2025-12-04T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.616078 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:30 crc kubenswrapper[4841]: E1204 09:19:30.616266 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.627942 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.628017 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.628041 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.628073 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.628094 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:30Z","lastTransitionTime":"2025-12-04T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.731566 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.731644 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.731667 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.731697 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.731719 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:30Z","lastTransitionTime":"2025-12-04T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.834586 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.834664 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.834689 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.834723 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.834746 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:30Z","lastTransitionTime":"2025-12-04T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.937575 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.937634 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.937653 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.937679 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:30 crc kubenswrapper[4841]: I1204 09:19:30.937695 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:30Z","lastTransitionTime":"2025-12-04T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.040657 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.040732 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.040757 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.040821 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.040844 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.143952 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.144342 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.144483 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.144607 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.144752 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.247227 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.247303 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.247320 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.247348 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.247365 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.350685 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.350815 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.350834 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.350860 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.350877 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.454203 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.454262 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.454279 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.454305 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.454320 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.515428 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.515500 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.515516 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.515540 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.515556 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: E1204 09:19:31.535528 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:31Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.541090 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.541162 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.541186 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.541218 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.541239 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: E1204 09:19:31.560378 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:31Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.568679 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.568800 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.568836 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.568870 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.568906 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: E1204 09:19:31.586811 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:31Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.592501 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.592561 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.592579 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.592604 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.592621 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: E1204 09:19:31.614344 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:31Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.616465 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:31 crc kubenswrapper[4841]: E1204 09:19:31.616623 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.616857 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.616858 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:31 crc kubenswrapper[4841]: E1204 09:19:31.617064 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:31 crc kubenswrapper[4841]: E1204 09:19:31.617203 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.622026 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.622063 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.622075 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.622094 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.622107 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: E1204 09:19:31.643408 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:31Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:31 crc kubenswrapper[4841]: E1204 09:19:31.643555 4841 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.645001 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.645037 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.645051 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.645069 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.645084 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.748955 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.748993 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.749001 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.749023 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.749033 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.852816 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.852882 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.852903 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.852934 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.852959 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.955886 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.955933 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.955945 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.955962 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:31 crc kubenswrapper[4841]: I1204 09:19:31.955975 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:31Z","lastTransitionTime":"2025-12-04T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.058851 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.058924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.058947 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.058976 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.058996 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:32Z","lastTransitionTime":"2025-12-04T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.162694 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.162748 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.162796 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.162815 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.162828 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:32Z","lastTransitionTime":"2025-12-04T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.266092 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.266166 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.266186 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.266216 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.266236 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:32Z","lastTransitionTime":"2025-12-04T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.369647 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.369721 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.369745 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.369811 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.369837 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:32Z","lastTransitionTime":"2025-12-04T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.473827 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.473903 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.473925 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.473966 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.473991 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:32Z","lastTransitionTime":"2025-12-04T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.576654 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.576699 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.576710 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.576728 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.576739 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:32Z","lastTransitionTime":"2025-12-04T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.617083 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:32 crc kubenswrapper[4841]: E1204 09:19:32.617695 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.679731 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.679837 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.679851 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.679876 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.679892 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:32Z","lastTransitionTime":"2025-12-04T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.783450 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.783524 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.783542 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.783567 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.783584 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:32Z","lastTransitionTime":"2025-12-04T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.886814 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.886880 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.886897 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.886920 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.886939 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:32Z","lastTransitionTime":"2025-12-04T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.989655 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.989716 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.989736 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.989799 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:32 crc kubenswrapper[4841]: I1204 09:19:32.989825 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:32Z","lastTransitionTime":"2025-12-04T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.093753 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.093861 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.093879 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.093905 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.093923 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:33Z","lastTransitionTime":"2025-12-04T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.196756 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.196855 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.196874 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.196896 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.196914 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:33Z","lastTransitionTime":"2025-12-04T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.299529 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.299611 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.299630 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.299652 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.299669 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:33Z","lastTransitionTime":"2025-12-04T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.406871 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.407406 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.407522 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.407584 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.407629 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:33Z","lastTransitionTime":"2025-12-04T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.461791 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:33 crc kubenswrapper[4841]: E1204 09:19:33.462045 4841 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:33 crc kubenswrapper[4841]: E1204 09:19:33.462171 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs podName:e74f87eb-fb70-4679-93f8-ebe5de564484 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:41.462142655 +0000 UTC m=+48.213932929 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs") pod "network-metrics-daemon-7t7hn" (UID: "e74f87eb-fb70-4679-93f8-ebe5de564484") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.510536 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.510617 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.510629 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.510645 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.510657 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:33Z","lastTransitionTime":"2025-12-04T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.613221 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.613308 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.613328 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.613353 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.613371 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:33Z","lastTransitionTime":"2025-12-04T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.616629 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.616685 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.616637 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:33 crc kubenswrapper[4841]: E1204 09:19:33.616904 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:33 crc kubenswrapper[4841]: E1204 09:19:33.617174 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:33 crc kubenswrapper[4841]: E1204 09:19:33.618068 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.636394 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.655746 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.679879 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.702580 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.715215 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.715284 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.715293 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.715307 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.715317 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:33Z","lastTransitionTime":"2025-12-04T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.717751 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.740460 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 
09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.758020 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.776393 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.790900 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.808831 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.818635 4841 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.818695 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.818712 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.818734 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.818750 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:33Z","lastTransitionTime":"2025-12-04T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.824890 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50e
db34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.845083 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.862851 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.881719 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.911535 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"ent handler 6 for removal\\\\nI1204 09:19:24.222460 6320 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1204 09:19:24.222469 6320 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:24.222484 6320 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:24.222492 6320 handler.go:208] Removed *v1.EgressFirewall 
event handler 9\\\\nI1204 09:19:24.222491 6320 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:24.222512 6320 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1204 09:19:24.222626 6320 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.222660 6320 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.223139 6320 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 09:19:24.223164 6320 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:24.223184 6320 factory.go:656] Stopping watch factory\\\\nI1204 09:19:24.223195 6320 ovnkube.go:599] Stopped ovnkube\\\\nI1204 09:19:24.223214 6320 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68c
dec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.921813 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.921940 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.921959 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.921981 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.922032 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:33Z","lastTransitionTime":"2025-12-04T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:33 crc kubenswrapper[4841]: I1204 09:19:33.926132 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:33Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:34 crc 
kubenswrapper[4841]: I1204 09:19:34.025819 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.025866 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.025878 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.025893 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.025904 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:34Z","lastTransitionTime":"2025-12-04T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.129113 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.129183 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.129200 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.129225 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.129242 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:34Z","lastTransitionTime":"2025-12-04T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.232054 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.232115 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.232125 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.232144 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.232156 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:34Z","lastTransitionTime":"2025-12-04T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.335203 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.335241 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.335249 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.335261 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.335270 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:34Z","lastTransitionTime":"2025-12-04T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.438370 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.438436 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.438453 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.438478 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.438496 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:34Z","lastTransitionTime":"2025-12-04T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.542059 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.542092 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.542103 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.542117 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.542127 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:34Z","lastTransitionTime":"2025-12-04T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.616402 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:34 crc kubenswrapper[4841]: E1204 09:19:34.616533 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.644569 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.644631 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.644652 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.644680 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.644701 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:34Z","lastTransitionTime":"2025-12-04T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.747859 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.748038 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.748068 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.748173 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.748218 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:34Z","lastTransitionTime":"2025-12-04T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.851902 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.851961 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.851977 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.851997 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.852011 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:34Z","lastTransitionTime":"2025-12-04T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.955490 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.955554 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.955567 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.955592 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:34 crc kubenswrapper[4841]: I1204 09:19:34.955608 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:34Z","lastTransitionTime":"2025-12-04T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.058339 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.058393 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.058405 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.058422 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.058434 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:35Z","lastTransitionTime":"2025-12-04T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.162167 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.162235 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.162258 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.162284 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.162302 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:35Z","lastTransitionTime":"2025-12-04T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.266076 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.266135 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.266152 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.266180 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.266198 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:35Z","lastTransitionTime":"2025-12-04T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.370175 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.370251 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.370274 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.370301 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.370321 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:35Z","lastTransitionTime":"2025-12-04T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.473490 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.473551 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.473570 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.473598 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.473616 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:35Z","lastTransitionTime":"2025-12-04T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.576188 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.576243 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.576255 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.576269 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.576278 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:35Z","lastTransitionTime":"2025-12-04T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.615970 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.616049 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:35 crc kubenswrapper[4841]: E1204 09:19:35.616087 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:35 crc kubenswrapper[4841]: E1204 09:19:35.616207 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.616272 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:35 crc kubenswrapper[4841]: E1204 09:19:35.616397 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.678968 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.679019 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.679034 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.679055 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.679071 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:35Z","lastTransitionTime":"2025-12-04T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.782229 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.782289 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.782307 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.782336 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.782354 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:35Z","lastTransitionTime":"2025-12-04T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.885389 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.885437 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.885449 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.885467 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.885480 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:35Z","lastTransitionTime":"2025-12-04T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.988743 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.988831 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.988851 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.988877 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:35 crc kubenswrapper[4841]: I1204 09:19:35.988895 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:35Z","lastTransitionTime":"2025-12-04T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.092442 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.092498 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.092514 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.092536 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.092552 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:36Z","lastTransitionTime":"2025-12-04T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.196704 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.196744 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.196755 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.196802 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.196814 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:36Z","lastTransitionTime":"2025-12-04T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.299815 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.299872 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.299884 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.299902 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.299922 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:36Z","lastTransitionTime":"2025-12-04T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.409232 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.409318 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.409339 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.410342 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.410405 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:36Z","lastTransitionTime":"2025-12-04T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.513970 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.514018 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.514035 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.514056 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.514072 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:36Z","lastTransitionTime":"2025-12-04T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.615692 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:36 crc kubenswrapper[4841]: E1204 09:19:36.615943 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.616949 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.617037 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.617057 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.617077 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.617094 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:36Z","lastTransitionTime":"2025-12-04T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.719726 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.720166 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.720191 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.720223 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.720246 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:36Z","lastTransitionTime":"2025-12-04T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.823172 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.823489 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.823600 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.823724 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.823827 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:36Z","lastTransitionTime":"2025-12-04T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.926635 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.926693 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.926711 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.926734 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:36 crc kubenswrapper[4841]: I1204 09:19:36.926752 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:36Z","lastTransitionTime":"2025-12-04T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.029341 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.029404 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.029421 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.029447 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.029465 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:37Z","lastTransitionTime":"2025-12-04T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.133013 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.133070 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.133087 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.133111 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.133129 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:37Z","lastTransitionTime":"2025-12-04T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.237712 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.238000 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.238143 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.238237 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.238335 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:37Z","lastTransitionTime":"2025-12-04T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.341546 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.341606 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.341627 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.341651 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.341667 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:37Z","lastTransitionTime":"2025-12-04T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.444952 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.445315 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.445473 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.445710 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.445971 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:37Z","lastTransitionTime":"2025-12-04T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.549502 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.549733 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.549831 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.549950 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.550027 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:37Z","lastTransitionTime":"2025-12-04T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.615970 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.616073 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 09:19:37 crc kubenswrapper[4841]: E1204 09:19:37.616170 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.616298 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 04 09:19:37 crc kubenswrapper[4841]: E1204 09:19:37.616438 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 04 09:19:37 crc kubenswrapper[4841]: E1204 09:19:37.616572 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.653409 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.653473 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.653490 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.653516 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.653534 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:37Z","lastTransitionTime":"2025-12-04T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.756842 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.757162 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.757254 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.757334 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.757425 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:37Z","lastTransitionTime":"2025-12-04T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.860335 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.860389 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.860406 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.860431 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.860449 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:37Z","lastTransitionTime":"2025-12-04T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.962631 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.962681 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.962702 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.962725 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:37 crc kubenswrapper[4841]: I1204 09:19:37.962744 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:37Z","lastTransitionTime":"2025-12-04T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.065976 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.066022 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.066033 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.066049 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.066060 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:38Z","lastTransitionTime":"2025-12-04T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.169755 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.169849 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.169865 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.169889 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.169906 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:38Z","lastTransitionTime":"2025-12-04T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.273588 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.273659 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.273676 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.273700 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.273718 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:38Z","lastTransitionTime":"2025-12-04T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.376245 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.376290 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.376302 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.376318 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.376327 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:38Z","lastTransitionTime":"2025-12-04T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.478544 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.478589 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.478598 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.478609 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.478618 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:38Z","lastTransitionTime":"2025-12-04T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.580984 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.581024 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.581033 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.581046 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.581054 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:38Z","lastTransitionTime":"2025-12-04T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.616480 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 04 09:19:38 crc kubenswrapper[4841]: E1204 09:19:38.616621 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.688337 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.688406 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.688848 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.689119 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.689174 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:38Z","lastTransitionTime":"2025-12-04T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.792825 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.792863 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.792873 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.792888 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.792899 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:38Z","lastTransitionTime":"2025-12-04T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.895522 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.895559 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.895570 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.895586 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.895597 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:38Z","lastTransitionTime":"2025-12-04T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.998112 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.998184 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.998205 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.998229 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:38 crc kubenswrapper[4841]: I1204 09:19:38.998246 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:38Z","lastTransitionTime":"2025-12-04T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.100890 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.100961 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.100983 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.101013 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.101103 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:39Z","lastTransitionTime":"2025-12-04T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.203887 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.203963 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.203984 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.204015 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.204035 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:39Z","lastTransitionTime":"2025-12-04T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.306907 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.306948 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.306961 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.307008 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.307020 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:39Z","lastTransitionTime":"2025-12-04T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.411171 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.411273 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.411299 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.411337 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.411362 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:39Z","lastTransitionTime":"2025-12-04T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.514352 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.514406 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.514424 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.514447 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.514463 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:39Z","lastTransitionTime":"2025-12-04T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.617256 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.617285 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.617295 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.617308 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.617317 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:39Z","lastTransitionTime":"2025-12-04T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.623099 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 04 09:19:39 crc kubenswrapper[4841]: E1204 09:19:39.623185 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.623472 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn"
Dec 04 09:19:39 crc kubenswrapper[4841]: E1204 09:19:39.623526 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.623562 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 09:19:39 crc kubenswrapper[4841]: E1204 09:19:39.623607 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.719240 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.719309 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.719330 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.719358 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.719379 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:39Z","lastTransitionTime":"2025-12-04T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.822557 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.822615 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.822632 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.822660 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.822677 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:39Z","lastTransitionTime":"2025-12-04T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.925789 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.925875 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.925887 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.925909 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:39 crc kubenswrapper[4841]: I1204 09:19:39.925925 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:39Z","lastTransitionTime":"2025-12-04T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.029185 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.029243 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.029267 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.029289 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.029306 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:40Z","lastTransitionTime":"2025-12-04T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.133133 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.133203 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.133213 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.133233 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.133244 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:40Z","lastTransitionTime":"2025-12-04T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.237022 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.237086 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.237103 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.237128 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.237150 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:40Z","lastTransitionTime":"2025-12-04T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.339877 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.339918 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.339928 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.339965 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.339976 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:40Z","lastTransitionTime":"2025-12-04T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.442709 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.442794 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.442808 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.442826 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.442838 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:40Z","lastTransitionTime":"2025-12-04T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.545548 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.545594 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.545603 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.545615 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.545623 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:40Z","lastTransitionTime":"2025-12-04T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.615967 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:40 crc kubenswrapper[4841]: E1204 09:19:40.616582 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.617047 4841 scope.go:117] "RemoveContainer" containerID="07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.648996 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.649055 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.649067 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.649087 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.649104 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:40Z","lastTransitionTime":"2025-12-04T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.752224 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.752265 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.752276 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.752293 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.752307 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:40Z","lastTransitionTime":"2025-12-04T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.855107 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.855155 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.855166 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.855182 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.855200 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:40Z","lastTransitionTime":"2025-12-04T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.958625 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.958677 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.958692 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.958713 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.958728 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:40Z","lastTransitionTime":"2025-12-04T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.977966 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/1.log" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.980146 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28"} Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.980680 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:19:40 crc kubenswrapper[4841]: I1204 09:19:40.994432 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:40Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.008494 4841 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.017826 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.036857 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.052599 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.061008 4841 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.061165 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.061182 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.061205 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.061222 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.072042 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.094111 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.111303 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.131227 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"ent handler 6 for removal\\\\nI1204 09:19:24.222460 6320 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1204 09:19:24.222469 6320 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:24.222484 6320 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:24.222492 6320 handler.go:208] Removed *v1.EgressFirewall 
event handler 9\\\\nI1204 09:19:24.222491 6320 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:24.222512 6320 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1204 09:19:24.222626 6320 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.222660 6320 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.223139 6320 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 09:19:24.223164 6320 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:24.223184 6320 factory.go:656] Stopping watch factory\\\\nI1204 09:19:24.223195 6320 ovnkube.go:599] Stopped ovnkube\\\\nI1204 09:19:24.223214 6320 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.146797 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc 
kubenswrapper[4841]: I1204 09:19:41.158543 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.163233 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.163292 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.163302 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.163317 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.163327 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.171647 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.184426 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.199728 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.209844 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.224402 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161
b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.265300 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.265366 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.265381 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.265398 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.265899 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.368808 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.368877 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.368895 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.368924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.368943 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.471450 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.471518 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.471533 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.471556 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.471573 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.557908 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.558028 4841 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.558094 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs podName:e74f87eb-fb70-4679-93f8-ebe5de564484 nodeName:}" failed. No retries permitted until 2025-12-04 09:19:57.558077068 +0000 UTC m=+64.309867272 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs") pod "network-metrics-daemon-7t7hn" (UID: "e74f87eb-fb70-4679-93f8-ebe5de564484") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.574508 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.574580 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.574602 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.574630 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.574654 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.616141 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.616143 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.616310 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.616400 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.616701 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.616811 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.678161 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.678214 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.678226 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.678243 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.678256 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.780611 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.780656 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.780672 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.780696 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.780712 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.795874 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.795932 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.795956 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.795981 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.796001 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.816443 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.820972 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.821005 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.821016 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.821030 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.821039 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.823836 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.836280 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.841939 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.845961 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 
09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.846988 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.847026 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.847036 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.847050 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.847059 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.863066 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.866186 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271
bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.867917 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.867940 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.867949 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc 
kubenswrapper[4841]: I1204 09:19:41.867964 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.867973 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.883016 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.887276 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.890792 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.890845 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.890861 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.890880 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.890895 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.895401 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.905133 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.905387 4841 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.906681 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a7
4eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.907841 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.907875 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.907885 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:41 crc 
kubenswrapper[4841]: I1204 09:19:41.907900 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.907909 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:41Z","lastTransitionTime":"2025-12-04T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.919640 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04
T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.930822 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc 
kubenswrapper[4841]: I1204 09:19:41.945093 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.961343 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.977810 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.984640 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/2.log" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.985230 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/1.log" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.987509 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28" exitCode=1 Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.987562 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" 
event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28"} Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.987625 4841 scope.go:117] "RemoveContainer" containerID="07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.988623 4841 scope.go:117] "RemoveContainer" containerID="05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28" Dec 04 09:19:41 crc kubenswrapper[4841]: E1204 09:19:41.988905 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" Dec 04 09:19:41 crc kubenswrapper[4841]: I1204 09:19:41.999005 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"ent handler 6 for removal\\\\nI1204 09:19:24.222460 6320 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nI1204 09:19:24.222469 6320 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:24.222484 6320 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:24.222492 6320 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 09:19:24.222491 6320 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:24.222512 6320 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1204 09:19:24.222626 6320 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.222660 6320 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.223139 6320 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 09:19:24.223164 6320 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:24.223184 6320 factory.go:656] Stopping watch factory\\\\nI1204 09:19:24.223195 6320 ovnkube.go:599] Stopped ovnkube\\\\nI1204 09:19:24.223214 6320 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:41Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.010533 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.010578 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.010589 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.010608 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.010620 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:42Z","lastTransitionTime":"2025-12-04T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.010785 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.021463 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.033659 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.048339 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.061273 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.070343 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.082891 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a7
4eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.094269 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f
40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.106556 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.113186 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.113215 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.113223 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.113236 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.113247 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:42Z","lastTransitionTime":"2025-12-04T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.117338 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.128815 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.142266 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.168016 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07dc5049992cca1556d39d6e0117994c4fe74a28002cf4aae4910bbdf9c549b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"message\\\":\\\"ent handler 6 for removal\\\\nI1204 09:19:24.222460 6320 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1204 09:19:24.222469 6320 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 09:19:24.222484 6320 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:19:24.222492 6320 handler.go:208] Removed *v1.EgressFirewall 
event handler 9\\\\nI1204 09:19:24.222491 6320 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1204 09:19:24.222512 6320 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1204 09:19:24.222626 6320 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.222660 6320 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 09:19:24.223139 6320 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 09:19:24.223164 6320 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 09:19:24.223184 6320 factory.go:656] Stopping watch factory\\\\nI1204 09:19:24.223195 6320 ovnkube.go:599] Stopped ovnkube\\\\nI1204 09:19:24.223214 6320 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"-metrics-daemon-7t7hn openshift-ovn-kubernetes/ovnkube-node-hhkwl openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-fmcq4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1204 09:19:41.596844 6534 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1204 09:19:41.596831 6534 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 09:19:41.596857 6534 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1204 09:19:41.596868 6534 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\
\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.183523 4841 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc 
kubenswrapper[4841]: I1204 09:19:42.197503 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a050f163-0b91-4576-bf15-18b2900ade01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208762b3effeb4ef79a3c7ac64874044c6c99cb18b898f4be1c57262e4f7aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8906c735ba3c5099a156d618ad6b8b55919b1efc952d2a5a42a64dcea6e0b69a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d010c39b4a6d72a19c06ec6f287e42dd355ec1980ea7579676aef4e3b1ff99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.221809 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.224271 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.224300 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.224310 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.224328 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.224341 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:42Z","lastTransitionTime":"2025-12-04T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.260608 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.280526 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.296637 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.306678 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.317964 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.328117 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.328173 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.328191 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.328211 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.328226 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:42Z","lastTransitionTime":"2025-12-04T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.330385 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:42Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.430217 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.430262 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.430275 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.430291 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.430303 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:42Z","lastTransitionTime":"2025-12-04T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.532645 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.532692 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.532703 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.532719 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.532730 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:42Z","lastTransitionTime":"2025-12-04T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.615844 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:42 crc kubenswrapper[4841]: E1204 09:19:42.616068 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.636126 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.636191 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.636215 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.636242 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.636263 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:42Z","lastTransitionTime":"2025-12-04T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.739380 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.739493 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.739517 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.739544 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.739567 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:42Z","lastTransitionTime":"2025-12-04T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.842053 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.842099 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.842107 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.842121 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.842130 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:42Z","lastTransitionTime":"2025-12-04T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.944665 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.944971 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.944980 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.944993 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.945001 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:42Z","lastTransitionTime":"2025-12-04T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.991909 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/2.log" Dec 04 09:19:42 crc kubenswrapper[4841]: I1204 09:19:42.995042 4841 scope.go:117] "RemoveContainer" containerID="05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28" Dec 04 09:19:42 crc kubenswrapper[4841]: E1204 09:19:42.995220 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.007532 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.017659 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.027705 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.037288 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.047156 4841 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.047210 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.047219 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.047235 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.047245 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:43Z","lastTransitionTime":"2025-12-04T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.050011 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50e
db34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.061873 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.073980 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.084626 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.101125 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"-metrics-daemon-7t7hn openshift-ovn-kubernetes/ovnkube-node-hhkwl openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-fmcq4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1204 09:19:41.596844 6534 obj_retry.go:418] 
Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1204 09:19:41.596831 6534 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 09:19:41.596857 6534 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1204 09:19:41.596868 6534 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68c
dec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.112864 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.126270 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a050f163-0b91-4576-bf15-18b2900ade01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208762b3effeb4ef79a3c7ac64874044c6c99cb18b898f4be1c57262e4f7aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8906c735ba3c5099a156d618ad6b8b55919b1efc952d2a5a42a64dcea6e0b69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d010c39b4a6d72a19c06ec6f287e42dd355ec1980ea7579676aef4e3b1ff99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfc5b10
3d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.137550 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.149722 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.149785 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.149801 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.149820 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.149835 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:43Z","lastTransitionTime":"2025-12-04T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.155026 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.170266 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.185785 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06ba
f6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:
19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.196600 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.210089 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161
b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.252098 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.252157 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.252175 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.252198 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.252216 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:43Z","lastTransitionTime":"2025-12-04T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.354803 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.354847 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.354860 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.354881 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.354893 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:43Z","lastTransitionTime":"2025-12-04T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.377815 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.377984 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-04 09:20:15.377956991 +0000 UTC m=+82.129747215 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.457806 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.457885 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.457907 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.457936 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.457959 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:43Z","lastTransitionTime":"2025-12-04T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.479534 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.479584 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.479604 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.479623 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.479723 4841 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.479749 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.479794 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.479804 4841 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.479827 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:20:15.479792639 +0000 UTC m=+82.231582843 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.479887 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.479904 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.479916 4841 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.479851 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 09:20:15.47984181 +0000 UTC m=+82.231632124 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.480223 4841 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.480245 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 09:20:15.480235139 +0000 UTC m=+82.232025343 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.480432 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:20:15.480397083 +0000 UTC m=+82.232187287 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.560340 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.560420 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.560445 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.560477 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.560535 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:43Z","lastTransitionTime":"2025-12-04T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.616306 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.616394 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.616453 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.616608 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.616714 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:43 crc kubenswrapper[4841]: E1204 09:19:43.616926 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.631536 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.647787 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.663189 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:43 crc 
kubenswrapper[4841]: I1204 09:19:43.663233 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.663245 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.663266 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.663281 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:43Z","lastTransitionTime":"2025-12-04T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.671348 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226af
c6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.685895 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.704199 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.728640 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.745985 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.765239 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.766722 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.766782 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.766795 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.766810 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.766822 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:43Z","lastTransitionTime":"2025-12-04T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.783255 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.799878 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.818890 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.838399 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.851242 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.869845 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.870345 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.871126 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.871220 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.871309 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:43Z","lastTransitionTime":"2025-12-04T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.871679 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"-metrics-daemon-7t7hn openshift-ovn-kubernetes/ovnkube-node-hhkwl openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-fmcq4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1204 09:19:41.596844 6534 obj_retry.go:418] 
Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1204 09:19:41.596831 6534 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 09:19:41.596857 6534 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1204 09:19:41.596868 6534 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68c
dec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.883442 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.897836 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a050f163-0b91-4576-bf15-18b2900ade01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208762b3effeb4ef79a3c7ac64874044c6c99cb18b898f4be1c57262e4f7aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8906c735ba3c5099a156d618ad6b8b55919b1efc952d2a5a42a64dcea6e0b69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d010c39b4a6d72a19c06ec6f287e42dd355ec1980ea7579676aef4e3b1ff99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfc5b10
3d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.914378 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:43Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.974169 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.974209 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.974220 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.974235 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:43 crc kubenswrapper[4841]: I1204 09:19:43.974244 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:43Z","lastTransitionTime":"2025-12-04T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.077071 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.077120 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.077129 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.077143 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.077153 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:44Z","lastTransitionTime":"2025-12-04T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.179625 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.179671 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.179682 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.179696 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.179706 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:44Z","lastTransitionTime":"2025-12-04T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.282743 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.282800 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.282819 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.282835 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.282846 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:44Z","lastTransitionTime":"2025-12-04T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.385085 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.385387 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.385499 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.385629 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.385741 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:44Z","lastTransitionTime":"2025-12-04T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.488873 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.488966 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.488981 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.488997 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.489009 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:44Z","lastTransitionTime":"2025-12-04T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.591481 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.591519 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.591527 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.591540 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.591548 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:44Z","lastTransitionTime":"2025-12-04T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.616017 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:44 crc kubenswrapper[4841]: E1204 09:19:44.616168 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.693846 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.693886 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.693898 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.693915 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.693927 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:44Z","lastTransitionTime":"2025-12-04T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.796620 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.796653 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.796661 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.796675 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.796684 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:44Z","lastTransitionTime":"2025-12-04T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.898947 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.898987 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.898998 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.899014 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:44 crc kubenswrapper[4841]: I1204 09:19:44.899024 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:44Z","lastTransitionTime":"2025-12-04T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.000873 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.000904 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.000912 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.000923 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.000932 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:45Z","lastTransitionTime":"2025-12-04T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.103872 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.103918 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.103930 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.103950 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.103962 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:45Z","lastTransitionTime":"2025-12-04T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.206421 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.206471 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.206485 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.206502 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.206513 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:45Z","lastTransitionTime":"2025-12-04T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.309320 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.309394 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.309416 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.309444 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.309461 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:45Z","lastTransitionTime":"2025-12-04T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.412518 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.412588 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.412610 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.412639 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.412657 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:45Z","lastTransitionTime":"2025-12-04T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.515170 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.515302 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.515328 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.515356 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.515378 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:45Z","lastTransitionTime":"2025-12-04T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.616590 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.616653 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.616721 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:45 crc kubenswrapper[4841]: E1204 09:19:45.616831 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:45 crc kubenswrapper[4841]: E1204 09:19:45.616964 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:45 crc kubenswrapper[4841]: E1204 09:19:45.617123 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.619030 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.619087 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.619105 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.619128 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.619145 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:45Z","lastTransitionTime":"2025-12-04T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.722316 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.722376 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.722392 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.722414 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.722438 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:45Z","lastTransitionTime":"2025-12-04T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.825215 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.825248 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.825260 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.825276 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.825288 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:45Z","lastTransitionTime":"2025-12-04T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.928711 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.928744 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.928754 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.928783 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:45 crc kubenswrapper[4841]: I1204 09:19:45.928794 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:45Z","lastTransitionTime":"2025-12-04T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.030999 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.031076 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.031108 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.031137 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.031161 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:46Z","lastTransitionTime":"2025-12-04T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.133737 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.133815 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.133835 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.133870 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.133887 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:46Z","lastTransitionTime":"2025-12-04T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.236964 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.237024 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.237038 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.237058 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.237076 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:46Z","lastTransitionTime":"2025-12-04T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.339279 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.339346 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.339357 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.339373 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.339385 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:46Z","lastTransitionTime":"2025-12-04T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.442616 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.442700 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.442733 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.442802 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.442828 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:46Z","lastTransitionTime":"2025-12-04T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.546125 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.546180 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.546196 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.546268 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.546286 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:46Z","lastTransitionTime":"2025-12-04T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.616212 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:46 crc kubenswrapper[4841]: E1204 09:19:46.616355 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.649277 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.649358 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.649380 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.649410 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.649432 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:46Z","lastTransitionTime":"2025-12-04T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.752589 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.752656 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.752680 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.752708 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.752728 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:46Z","lastTransitionTime":"2025-12-04T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.855464 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.855503 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.855513 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.855531 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.855543 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:46Z","lastTransitionTime":"2025-12-04T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.958873 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.958982 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.959000 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.959024 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:46 crc kubenswrapper[4841]: I1204 09:19:46.959072 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:46Z","lastTransitionTime":"2025-12-04T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.063438 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.063511 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.063529 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.063550 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.063566 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:47Z","lastTransitionTime":"2025-12-04T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.166547 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.166592 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.166604 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.166620 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.166632 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:47Z","lastTransitionTime":"2025-12-04T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.268900 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.268971 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.268993 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.269022 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.269045 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:47Z","lastTransitionTime":"2025-12-04T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.371952 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.372016 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.372040 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.372067 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.372090 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:47Z","lastTransitionTime":"2025-12-04T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.475445 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.475540 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.475552 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.475568 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.475581 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:47Z","lastTransitionTime":"2025-12-04T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.578662 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.578698 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.578725 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.578738 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.578747 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:47Z","lastTransitionTime":"2025-12-04T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.615803 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.615910 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:47 crc kubenswrapper[4841]: E1204 09:19:47.615995 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.616005 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:47 crc kubenswrapper[4841]: E1204 09:19:47.616106 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:47 crc kubenswrapper[4841]: E1204 09:19:47.616217 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.681629 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.681688 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.681705 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.681727 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.681743 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:47Z","lastTransitionTime":"2025-12-04T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.784098 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.784125 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.784133 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.784145 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.784153 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:47Z","lastTransitionTime":"2025-12-04T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.886905 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.886940 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.886950 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.886965 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.886973 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:47Z","lastTransitionTime":"2025-12-04T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.989580 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.989664 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.989690 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.989721 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:47 crc kubenswrapper[4841]: I1204 09:19:47.989752 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:47Z","lastTransitionTime":"2025-12-04T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.091718 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.091792 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.091805 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.091826 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.091840 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:48Z","lastTransitionTime":"2025-12-04T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.195194 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.195255 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.195277 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.195306 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.195329 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:48Z","lastTransitionTime":"2025-12-04T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.298382 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.298538 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.298559 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.298611 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.298629 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:48Z","lastTransitionTime":"2025-12-04T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.401059 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.401111 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.401146 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.401166 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.401178 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:48Z","lastTransitionTime":"2025-12-04T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.503559 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.503625 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.503642 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.503671 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.503692 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:48Z","lastTransitionTime":"2025-12-04T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.606276 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.606649 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.606683 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.606712 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.606734 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:48Z","lastTransitionTime":"2025-12-04T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.616435 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:48 crc kubenswrapper[4841]: E1204 09:19:48.616556 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.710095 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.710124 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.710132 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.710145 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.710154 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:48Z","lastTransitionTime":"2025-12-04T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.812385 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.812431 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.812441 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.812457 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.812470 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:48Z","lastTransitionTime":"2025-12-04T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.915723 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.915884 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.915919 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.915951 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:48 crc kubenswrapper[4841]: I1204 09:19:48.915982 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:48Z","lastTransitionTime":"2025-12-04T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.018315 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.018349 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.018357 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.018369 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.018379 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:49Z","lastTransitionTime":"2025-12-04T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.121586 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.121668 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.121693 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.121731 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.121756 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:49Z","lastTransitionTime":"2025-12-04T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.225622 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.225689 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.225707 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.225731 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.225749 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:49Z","lastTransitionTime":"2025-12-04T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.328519 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.328574 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.328587 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.328989 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.329016 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:49Z","lastTransitionTime":"2025-12-04T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.434854 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.434900 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.434909 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.434924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.434934 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:49Z","lastTransitionTime":"2025-12-04T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.537382 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.537598 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.537657 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.537754 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.537838 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:49Z","lastTransitionTime":"2025-12-04T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.616505 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:49 crc kubenswrapper[4841]: E1204 09:19:49.616686 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.617084 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.617193 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:49 crc kubenswrapper[4841]: E1204 09:19:49.617246 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:49 crc kubenswrapper[4841]: E1204 09:19:49.617373 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.640600 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.640662 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.640679 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.640701 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.640718 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:49Z","lastTransitionTime":"2025-12-04T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.743789 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.743834 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.743853 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.743876 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.743894 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:49Z","lastTransitionTime":"2025-12-04T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.846682 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.846807 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.846834 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.846868 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.846890 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:49Z","lastTransitionTime":"2025-12-04T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.949700 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.949835 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.949855 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.949877 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:49 crc kubenswrapper[4841]: I1204 09:19:49.949893 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:49Z","lastTransitionTime":"2025-12-04T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.052948 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.053007 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.053030 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.053061 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.053083 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:50Z","lastTransitionTime":"2025-12-04T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.156020 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.156067 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.156082 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.156100 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.156113 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:50Z","lastTransitionTime":"2025-12-04T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.259266 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.259328 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.259349 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.259371 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.259386 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:50Z","lastTransitionTime":"2025-12-04T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.362298 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.362359 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.362370 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.362387 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.362399 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:50Z","lastTransitionTime":"2025-12-04T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.465387 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.465462 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.465479 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.465501 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.465519 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:50Z","lastTransitionTime":"2025-12-04T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.567854 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.567905 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.567915 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.567928 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.567938 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:50Z","lastTransitionTime":"2025-12-04T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.616690 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:50 crc kubenswrapper[4841]: E1204 09:19:50.616964 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.671067 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.671188 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.671205 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.671231 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.671248 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:50Z","lastTransitionTime":"2025-12-04T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.774500 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.774578 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.774616 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.774646 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.774673 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:50Z","lastTransitionTime":"2025-12-04T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.877439 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.877500 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.877522 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.877551 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.877570 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:50Z","lastTransitionTime":"2025-12-04T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.980041 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.980283 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.980395 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.980472 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:50 crc kubenswrapper[4841]: I1204 09:19:50.980538 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:50Z","lastTransitionTime":"2025-12-04T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.083465 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.083972 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.084173 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.084360 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.084548 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.187632 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.187686 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.187704 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.187729 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.187748 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.290648 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.290907 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.291024 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.291113 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.291197 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.394419 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.395080 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.395229 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.395357 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.395465 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.499052 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.499132 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.499168 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.499199 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.499239 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.601919 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.602161 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.602251 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.602369 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.602450 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.616272 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn"
Dec 04 09:19:51 crc kubenswrapper[4841]: E1204 09:19:51.616448 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.616301 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 09:19:51 crc kubenswrapper[4841]: E1204 09:19:51.616623 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.616272 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 04 09:19:51 crc kubenswrapper[4841]: E1204 09:19:51.616841 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.704630 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.704667 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.704675 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.704689 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.704698 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.807635 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.807686 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.807699 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.807714 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.807724 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.910826 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.910908 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.910936 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.910969 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.910993 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.935333 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.935391 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.935408 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.935432 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.935451 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 09:19:51 crc kubenswrapper[4841]: E1204 09:19:51.957901 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:51Z is after 2025-08-24T17:21:41Z"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.962715 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.962784 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.962797 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.962817 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.962829 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 09:19:51 crc kubenswrapper[4841]: E1204 09:19:51.983483 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:51Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.988228 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.988299 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.988326 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.988357 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:51 crc kubenswrapper[4841]: I1204 09:19:51.988382 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:51Z","lastTransitionTime":"2025-12-04T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: E1204 09:19:52.005875 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:52Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.010465 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.010536 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.010564 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.010591 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.010612 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: E1204 09:19:52.026116 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:52Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.030280 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.030336 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.030355 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.030378 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.030394 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: E1204 09:19:52.049731 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:52Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:52 crc kubenswrapper[4841]: E1204 09:19:52.049911 4841 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.051786 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.051852 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.051869 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.051896 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.051914 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.154535 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.154574 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.154584 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.154606 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.154618 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.257628 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.257691 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.257705 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.257725 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.257740 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.361174 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.361245 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.361262 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.361290 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.361308 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.464476 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.464541 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.464565 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.464589 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.464606 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.567328 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.567393 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.567413 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.567438 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.567455 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.616817 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:52 crc kubenswrapper[4841]: E1204 09:19:52.617090 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.669848 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.669905 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.669924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.669949 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.669967 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.773344 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.773436 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.773453 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.773993 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.774123 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.877876 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.877946 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.877961 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.877991 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.878006 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.980649 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.980707 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.980720 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.980743 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:52 crc kubenswrapper[4841]: I1204 09:19:52.980782 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:52Z","lastTransitionTime":"2025-12-04T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.083042 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.083078 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.083089 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.083104 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.083116 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:53Z","lastTransitionTime":"2025-12-04T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.186249 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.186313 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.186332 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.186354 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.186368 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:53Z","lastTransitionTime":"2025-12-04T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.290217 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.290265 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.290277 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.290296 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.290311 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:53Z","lastTransitionTime":"2025-12-04T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.393581 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.393653 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.393667 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.393689 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.393706 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:53Z","lastTransitionTime":"2025-12-04T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.499299 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.499366 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.499392 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.499415 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.499434 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:53Z","lastTransitionTime":"2025-12-04T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.602040 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.602112 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.602132 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.602160 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.602180 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:53Z","lastTransitionTime":"2025-12-04T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.615784 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.615789 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:53 crc kubenswrapper[4841]: E1204 09:19:53.615939 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:53 crc kubenswrapper[4841]: E1204 09:19:53.616010 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.616972 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:53 crc kubenswrapper[4841]: E1204 09:19:53.617165 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.632466 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a050f163-0b91-4576-bf15-18b2900ade01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208762b3effeb4ef79a3c7ac64874044c6c99cb18b898f4be1c57262e4f7aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://8906c735ba3c5099a156d618ad6b8b55919b1efc952d2a5a42a64dcea6e0b69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d010c39b4a6d72a19c06ec6f287e42dd355ec1980ea7579676aef4e3b1ff99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.648695 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.669680 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.688004 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.704415 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.704468 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.704486 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.704511 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.704529 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:53Z","lastTransitionTime":"2025-12-04T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.719285 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"-metrics-daemon-7t7hn openshift-ovn-kubernetes/ovnkube-node-hhkwl openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-fmcq4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1204 09:19:41.596844 6534 obj_retry.go:418] 
Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1204 09:19:41.596831 6534 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 09:19:41.596857 6534 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1204 09:19:41.596868 6534 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68c
dec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.735012 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.753121 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.769627 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.789344 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.808917 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:53 crc 
kubenswrapper[4841]: I1204 09:19:53.808972 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.809004 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.809026 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.809043 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:53Z","lastTransitionTime":"2025-12-04T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.810802 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226af
c6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.822345 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.842938 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161
b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.857400 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.871892 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.884900 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.900337 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.911375 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.911434 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.911446 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.911463 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.911498 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:53Z","lastTransitionTime":"2025-12-04T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:53 crc kubenswrapper[4841]: I1204 09:19:53.916873 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:53Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.014843 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:54 crc 
kubenswrapper[4841]: I1204 09:19:54.014893 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.014909 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.014928 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.014940 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:54Z","lastTransitionTime":"2025-12-04T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.118113 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.118160 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.118172 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.118188 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.118201 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:54Z","lastTransitionTime":"2025-12-04T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.221171 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.221244 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.221267 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.221296 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.221319 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:54Z","lastTransitionTime":"2025-12-04T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.323867 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.323969 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.323986 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.324015 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.324034 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:54Z","lastTransitionTime":"2025-12-04T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.426545 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.426585 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.426595 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.426608 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.426617 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:54Z","lastTransitionTime":"2025-12-04T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.528420 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.528467 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.528479 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.528497 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.528509 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:54Z","lastTransitionTime":"2025-12-04T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.616869 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:54 crc kubenswrapper[4841]: E1204 09:19:54.617684 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.618165 4841 scope.go:117] "RemoveContainer" containerID="05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28" Dec 04 09:19:54 crc kubenswrapper[4841]: E1204 09:19:54.618642 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.630827 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.630908 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.630995 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.631072 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.631102 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:54Z","lastTransitionTime":"2025-12-04T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.734570 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.734633 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.734648 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.734664 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.735011 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:54Z","lastTransitionTime":"2025-12-04T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.838956 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.839001 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.839012 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.839029 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.839041 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:54Z","lastTransitionTime":"2025-12-04T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.942459 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.942511 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.942529 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.942546 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:54 crc kubenswrapper[4841]: I1204 09:19:54.942557 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:54Z","lastTransitionTime":"2025-12-04T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.045606 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.045668 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.045691 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.045720 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.045741 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:55Z","lastTransitionTime":"2025-12-04T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.148660 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.148720 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.148742 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.148801 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.148826 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:55Z","lastTransitionTime":"2025-12-04T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.251856 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.251906 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.251922 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.251947 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.251963 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:55Z","lastTransitionTime":"2025-12-04T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.354646 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.354680 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.354691 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.354703 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.354712 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:55Z","lastTransitionTime":"2025-12-04T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.457523 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.457559 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.457567 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.457582 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.457591 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:55Z","lastTransitionTime":"2025-12-04T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.561192 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.561257 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.561278 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.561306 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.561325 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:55Z","lastTransitionTime":"2025-12-04T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.616554 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:55 crc kubenswrapper[4841]: E1204 09:19:55.616733 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.616576 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:55 crc kubenswrapper[4841]: E1204 09:19:55.617013 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.617036 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:55 crc kubenswrapper[4841]: E1204 09:19:55.617196 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.664715 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.664841 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.664864 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.664893 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.664912 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:55Z","lastTransitionTime":"2025-12-04T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.767662 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.768057 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.768070 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.768087 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.768100 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:55Z","lastTransitionTime":"2025-12-04T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.871440 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.871481 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.871496 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.871512 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.871526 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:55Z","lastTransitionTime":"2025-12-04T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.974161 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.974216 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.974233 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.974257 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:55 crc kubenswrapper[4841]: I1204 09:19:55.974276 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:55Z","lastTransitionTime":"2025-12-04T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.076268 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.076300 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.076308 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.076322 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.076332 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:56Z","lastTransitionTime":"2025-12-04T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.179609 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.180141 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.180218 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.180297 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.180374 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:56Z","lastTransitionTime":"2025-12-04T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.283191 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.283245 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.283259 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.283280 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.283294 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:56Z","lastTransitionTime":"2025-12-04T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.386218 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.386282 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.386305 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.386333 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.386354 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:56Z","lastTransitionTime":"2025-12-04T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.488482 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.488846 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.489048 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.489201 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.489393 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:56Z","lastTransitionTime":"2025-12-04T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.592510 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.592547 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.592557 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.592573 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.592587 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:56Z","lastTransitionTime":"2025-12-04T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.616633 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:56 crc kubenswrapper[4841]: E1204 09:19:56.617480 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.696401 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.696752 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.696939 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.697090 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.697218 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:56Z","lastTransitionTime":"2025-12-04T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.799033 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.799098 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.799119 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.799147 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.799167 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:56Z","lastTransitionTime":"2025-12-04T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.902240 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.902306 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.902327 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.902352 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:56 crc kubenswrapper[4841]: I1204 09:19:56.902369 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:56Z","lastTransitionTime":"2025-12-04T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.004975 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.005379 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.005475 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.005754 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.005876 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:57Z","lastTransitionTime":"2025-12-04T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.109119 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.109183 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.109201 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.109224 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.109241 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:57Z","lastTransitionTime":"2025-12-04T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.212449 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.212485 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.212493 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.212509 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.212517 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:57Z","lastTransitionTime":"2025-12-04T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.315209 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.315252 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.315267 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.315285 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.315300 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:57Z","lastTransitionTime":"2025-12-04T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.418159 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.418239 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.418258 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.418282 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.418304 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:57Z","lastTransitionTime":"2025-12-04T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.520711 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.520885 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.520911 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.520943 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.520973 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:57Z","lastTransitionTime":"2025-12-04T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.616093 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.616101 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:57 crc kubenswrapper[4841]: E1204 09:19:57.616298 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.616121 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:57 crc kubenswrapper[4841]: E1204 09:19:57.616455 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:57 crc kubenswrapper[4841]: E1204 09:19:57.616565 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.622883 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:57 crc kubenswrapper[4841]: E1204 09:19:57.623149 4841 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:57 crc kubenswrapper[4841]: E1204 09:19:57.623245 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs podName:e74f87eb-fb70-4679-93f8-ebe5de564484 nodeName:}" failed. No retries permitted until 2025-12-04 09:20:29.6232152 +0000 UTC m=+96.375005444 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs") pod "network-metrics-daemon-7t7hn" (UID: "e74f87eb-fb70-4679-93f8-ebe5de564484") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.624022 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.624055 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.624067 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.624083 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.624094 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:57Z","lastTransitionTime":"2025-12-04T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.726542 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.726606 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.726618 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.726635 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.726646 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:57Z","lastTransitionTime":"2025-12-04T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.829086 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.829120 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.829134 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.829149 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.829161 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:57Z","lastTransitionTime":"2025-12-04T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.932059 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.932127 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.932144 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.932172 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:57 crc kubenswrapper[4841]: I1204 09:19:57.932190 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:57Z","lastTransitionTime":"2025-12-04T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.035121 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.035153 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.035185 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.035206 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.035216 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:58Z","lastTransitionTime":"2025-12-04T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.137330 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.137373 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.137385 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.137400 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.137412 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:58Z","lastTransitionTime":"2025-12-04T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.241213 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.241271 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.241287 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.241309 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.241324 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:58Z","lastTransitionTime":"2025-12-04T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.344692 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.344808 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.344826 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.344848 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.344865 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:58Z","lastTransitionTime":"2025-12-04T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.447783 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.447840 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.447849 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.447861 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.447870 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:58Z","lastTransitionTime":"2025-12-04T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.549748 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.549818 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.549830 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.549844 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.549855 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:58Z","lastTransitionTime":"2025-12-04T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.616228 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:19:58 crc kubenswrapper[4841]: E1204 09:19:58.616440 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.652033 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.652067 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.652078 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.652091 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.652102 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:58Z","lastTransitionTime":"2025-12-04T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.754621 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.754663 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.754671 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.754681 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.754690 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:58Z","lastTransitionTime":"2025-12-04T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.856418 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.856470 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.856482 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.856494 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.856520 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:58Z","lastTransitionTime":"2025-12-04T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.958979 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.959060 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.959075 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.959092 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:58 crc kubenswrapper[4841]: I1204 09:19:58.959104 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:58Z","lastTransitionTime":"2025-12-04T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.047705 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-76xdk_86bfe6c3-d06e-40b1-9801-74abeb07ae15/kube-multus/0.log" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.047779 4841 generic.go:334] "Generic (PLEG): container finished" podID="86bfe6c3-d06e-40b1-9801-74abeb07ae15" containerID="6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0" exitCode=1 Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.047811 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-76xdk" event={"ID":"86bfe6c3-d06e-40b1-9801-74abeb07ae15","Type":"ContainerDied","Data":"6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.048169 4841 scope.go:117] "RemoveContainer" containerID="6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.061251 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.061283 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.061295 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.061310 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.061322 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:59Z","lastTransitionTime":"2025-12-04T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.061519 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.073072 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:58Z\\\",\\\"message\\\":\\\"2025-12-04T09:19:13+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4300b71e-864a-4c2b-9527-04188af34367\\\\n2025-12-04T09:19:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4300b71e-864a-4c2b-9527-04188af34367 to /host/opt/cni/bin/\\\\n2025-12-04T09:19:13Z [verbose] multus-daemon started\\\\n2025-12-04T09:19:13Z [verbose] Readiness Indicator file check\\\\n2025-12-04T09:19:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.086357 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226af
c6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.096623 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.108597 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.123630 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.133669 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.145551 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.158234 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.163131 4841 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.163152 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.163159 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.163182 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.163192 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:59Z","lastTransitionTime":"2025-12-04T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.170865 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50e
db34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.180750 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.191720 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.205188 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.223940 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"-metrics-daemon-7t7hn openshift-ovn-kubernetes/ovnkube-node-hhkwl openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-fmcq4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1204 09:19:41.596844 6534 obj_retry.go:418] 
Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1204 09:19:41.596831 6534 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 09:19:41.596857 6534 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1204 09:19:41.596868 6534 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68c
dec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.233072 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.243468 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a050f163-0b91-4576-bf15-18b2900ade01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208762b3effeb4ef79a3c7ac64874044c6c99cb18b898f4be1c57262e4f7aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8906c735ba3c5099a156d618ad6b8b55919b1efc952d2a5a42a64dcea6e0b69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d010c39b4a6d72a19c06ec6f287e42dd355ec1980ea7579676aef4e3b1ff99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfc5b10
3d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.256651 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:19:59Z is after 2025-08-24T17:21:41Z" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.265134 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.265159 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.265169 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.265183 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.265194 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:59Z","lastTransitionTime":"2025-12-04T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.367801 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.367867 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.367887 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.367909 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.367925 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:59Z","lastTransitionTime":"2025-12-04T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.470217 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.470267 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.470281 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.470299 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.470310 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:59Z","lastTransitionTime":"2025-12-04T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.572236 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.572470 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.572480 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.572494 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.572505 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:59Z","lastTransitionTime":"2025-12-04T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.615814 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.615847 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.615855 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:19:59 crc kubenswrapper[4841]: E1204 09:19:59.615929 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:19:59 crc kubenswrapper[4841]: E1204 09:19:59.616016 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:19:59 crc kubenswrapper[4841]: E1204 09:19:59.616096 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.674660 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.674695 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.674703 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.674716 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.674724 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:59Z","lastTransitionTime":"2025-12-04T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.777247 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.777289 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.777298 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.777311 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.777319 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:59Z","lastTransitionTime":"2025-12-04T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.880379 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.880440 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.880458 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.880484 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.880502 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:59Z","lastTransitionTime":"2025-12-04T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.982661 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.982707 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.982718 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.982736 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:19:59 crc kubenswrapper[4841]: I1204 09:19:59.982748 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:19:59Z","lastTransitionTime":"2025-12-04T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.051673 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-76xdk_86bfe6c3-d06e-40b1-9801-74abeb07ae15/kube-multus/0.log" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.051716 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-76xdk" event={"ID":"86bfe6c3-d06e-40b1-9801-74abeb07ae15","Type":"ContainerStarted","Data":"d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66"} Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.065574 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc 
kubenswrapper[4841]: I1204 09:20:00.080436 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a050f163-0b91-4576-bf15-18b2900ade01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208762b3effeb4ef79a3c7ac64874044c6c99cb18b898f4be1c57262e4f7aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8906c735ba3c5099a156d618ad6b8b55919b1efc952d2a5a42a64dcea6e0b69a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d010c39b4a6d72a19c06ec6f287e42dd355ec1980ea7579676aef4e3b1ff99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.084685 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.084717 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.084730 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.084743 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.084753 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:00Z","lastTransitionTime":"2025-12-04T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.094566 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.108524 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.123294 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.148405 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"-metrics-daemon-7t7hn openshift-ovn-kubernetes/ovnkube-node-hhkwl openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-fmcq4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1204 09:19:41.596844 6534 obj_retry.go:418] 
Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1204 09:19:41.596831 6534 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 09:19:41.596857 6534 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1204 09:19:41.596868 6534 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68c
dec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.162059 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.175017 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T0
9:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.186987 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.187040 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.187058 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.187083 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.187102 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:00Z","lastTransitionTime":"2025-12-04T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.188109 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.204092 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:58Z\\\",\\\"message\\\":\\\"2025-12-04T09:19:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4300b71e-864a-4c2b-9527-04188af34367\\\\n2025-12-04T09:19:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4300b71e-864a-4c2b-9527-04188af34367 to /host/opt/cni/bin/\\\\n2025-12-04T09:19:13Z [verbose] multus-daemon started\\\\n2025-12-04T09:19:13Z [verbose] 
Readiness Indicator file check\\\\n2025-12-04T09:19:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.221835 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621
6cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.235951 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161
b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.251405 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.267663 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.282215 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.289228 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.289283 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.289301 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.289364 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.289388 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:00Z","lastTransitionTime":"2025-12-04T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.297246 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.313281 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f
40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:00Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.391677 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.391719 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.391729 4841 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.391744 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.391755 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:00Z","lastTransitionTime":"2025-12-04T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.494683 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.494871 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.494939 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.495005 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.495059 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:00Z","lastTransitionTime":"2025-12-04T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.598398 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.598496 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.598571 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.598645 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.598710 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:00Z","lastTransitionTime":"2025-12-04T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.616105 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:00 crc kubenswrapper[4841]: E1204 09:20:00.616262 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.701224 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.701262 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.701274 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.701293 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.701304 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:00Z","lastTransitionTime":"2025-12-04T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.804184 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.804243 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.804260 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.804282 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.804299 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:00Z","lastTransitionTime":"2025-12-04T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.906564 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.906723 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.906841 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.906938 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:00 crc kubenswrapper[4841]: I1204 09:20:00.907007 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:00Z","lastTransitionTime":"2025-12-04T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.008717 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.009002 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.009083 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.009157 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.009245 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:01Z","lastTransitionTime":"2025-12-04T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.111352 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.111419 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.111507 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.111561 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.111585 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:01Z","lastTransitionTime":"2025-12-04T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.213933 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.213960 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.213968 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.213980 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.213989 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:01Z","lastTransitionTime":"2025-12-04T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.316187 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.316230 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.316248 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.316269 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.316287 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:01Z","lastTransitionTime":"2025-12-04T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.419402 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.419450 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.419467 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.419490 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.419503 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:01Z","lastTransitionTime":"2025-12-04T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.521845 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.521893 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.521904 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.521921 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.521932 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:01Z","lastTransitionTime":"2025-12-04T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.616307 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.616358 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.616405 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:01 crc kubenswrapper[4841]: E1204 09:20:01.616514 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:01 crc kubenswrapper[4841]: E1204 09:20:01.616645 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:01 crc kubenswrapper[4841]: E1204 09:20:01.616900 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.623788 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.623827 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.623838 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.623853 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.623867 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:01Z","lastTransitionTime":"2025-12-04T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.726538 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.726574 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.726585 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.726600 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.726610 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:01Z","lastTransitionTime":"2025-12-04T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.828564 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.828612 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.828625 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.828643 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.828681 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:01Z","lastTransitionTime":"2025-12-04T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.931358 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.931412 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.931424 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.931436 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:01 crc kubenswrapper[4841]: I1204 09:20:01.931445 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:01Z","lastTransitionTime":"2025-12-04T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.034237 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.034286 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.034301 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.034318 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.034330 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.085225 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.085267 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.085278 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.085294 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.085306 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: E1204 09:20:02.099612 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:02Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.103383 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.103432 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.103448 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.103473 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.103489 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: E1204 09:20:02.123703 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:02Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.128198 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.128260 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.128280 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.128305 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.128321 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: E1204 09:20:02.143687 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:02Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.147328 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.147363 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.147375 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.147392 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.147403 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: E1204 09:20:02.164231 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:02Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.167911 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.167940 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.167948 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.167963 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.167972 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: E1204 09:20:02.184798 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:02Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:02 crc kubenswrapper[4841]: E1204 09:20:02.184941 4841 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.186423 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.186452 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.186462 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.186499 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.186512 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.289059 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.289092 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.289103 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.289120 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.289132 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.391282 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.391322 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.391334 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.391371 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.391382 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.493616 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.493658 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.493669 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.493686 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.493697 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.596668 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.596711 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.596719 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.596736 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.596790 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.616504 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:02 crc kubenswrapper[4841]: E1204 09:20:02.616686 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.699645 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.699692 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.699701 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.699718 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.699728 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.802523 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.802571 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.802588 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.802613 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.802636 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.905309 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.905360 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.905377 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.905400 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:02 crc kubenswrapper[4841]: I1204 09:20:02.905418 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:02Z","lastTransitionTime":"2025-12-04T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.007677 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.007720 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.007728 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.007744 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.007754 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:03Z","lastTransitionTime":"2025-12-04T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.110389 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.110433 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.110443 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.110473 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.110482 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:03Z","lastTransitionTime":"2025-12-04T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.213354 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.213458 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.213475 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.213499 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.213540 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:03Z","lastTransitionTime":"2025-12-04T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.316714 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.316851 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.316897 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.316929 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.316976 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:03Z","lastTransitionTime":"2025-12-04T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.420237 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.420331 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.420371 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.420398 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.420496 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:03Z","lastTransitionTime":"2025-12-04T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.524314 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.524372 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.524389 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.524416 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.524432 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:03Z","lastTransitionTime":"2025-12-04T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.616019 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.616048 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.616041 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:03 crc kubenswrapper[4841]: E1204 09:20:03.616314 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:03 crc kubenswrapper[4841]: E1204 09:20:03.616617 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:03 crc kubenswrapper[4841]: E1204 09:20:03.616720 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.629015 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.629090 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.629099 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.629114 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.629125 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:03Z","lastTransitionTime":"2025-12-04T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.644057 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.659592 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.676417 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.694632 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.717355 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:58Z\\\",\\\"message\\\":\\\"2025-12-04T09:19:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4300b71e-864a-4c2b-9527-04188af34367\\\\n2025-12-04T09:19:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4300b71e-864a-4c2b-9527-04188af34367 to /host/opt/cni/bin/\\\\n2025-12-04T09:19:13Z [verbose] multus-daemon started\\\\n2025-12-04T09:19:13Z [verbose] 
Readiness Indicator file check\\\\n2025-12-04T09:19:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.730720 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.730742 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.730750 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.730787 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.730796 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:03Z","lastTransitionTime":"2025-12-04T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.737622 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.749202 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f
40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.762238 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.777253 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.785586 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.794839 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.814700 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"-metrics-daemon-7t7hn openshift-ovn-kubernetes/ovnkube-node-hhkwl openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-fmcq4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1204 09:19:41.596844 6534 obj_retry.go:418] 
Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1204 09:19:41.596831 6534 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 09:19:41.596857 6534 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1204 09:19:41.596868 6534 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68c
dec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.827992 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.832493 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.832533 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.832543 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.832555 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.832566 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:03Z","lastTransitionTime":"2025-12-04T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.842071 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a050f163-0b91-4576-bf15-18b2900ade01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208762b3effeb4ef79a3c7ac64874044c6c99cb18b898f4be1c57262e4f7aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8906c735ba3c5099a156d618ad6b8b
55919b1efc952d2a5a42a64dcea6e0b69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d010c39b4a6d72a19c06ec6f287e42dd355ec1980ea7579676aef4e3b1ff99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.856304 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.872159 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.884094 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:03Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.934726 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.934775 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.934784 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.934796 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:03 crc kubenswrapper[4841]: I1204 09:20:03.934805 4841 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:03Z","lastTransitionTime":"2025-12-04T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.037175 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.037207 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.037215 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.037228 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.037239 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:04Z","lastTransitionTime":"2025-12-04T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.139177 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.139428 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.139489 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.139553 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.139620 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:04Z","lastTransitionTime":"2025-12-04T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.242096 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.242127 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.242136 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.242150 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.242159 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:04Z","lastTransitionTime":"2025-12-04T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.344615 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.344656 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.344664 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.344681 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.344690 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:04Z","lastTransitionTime":"2025-12-04T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.446935 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.446965 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.446974 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.446986 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.446995 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:04Z","lastTransitionTime":"2025-12-04T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.549198 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.549748 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.549868 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.549965 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.550045 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:04Z","lastTransitionTime":"2025-12-04T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.616589 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:04 crc kubenswrapper[4841]: E1204 09:20:04.616983 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.652748 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.652837 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.652853 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.652875 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.652892 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:04Z","lastTransitionTime":"2025-12-04T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.755307 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.755355 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.755365 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.755382 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.755396 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:04Z","lastTransitionTime":"2025-12-04T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.858076 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.858153 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.858175 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.858205 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.858227 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:04Z","lastTransitionTime":"2025-12-04T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.961538 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.961635 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.961654 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.961678 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:04 crc kubenswrapper[4841]: I1204 09:20:04.961695 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:04Z","lastTransitionTime":"2025-12-04T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.064474 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.064550 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.064573 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.064600 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.064622 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:05Z","lastTransitionTime":"2025-12-04T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.167932 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.167990 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.168007 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.168028 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.168044 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:05Z","lastTransitionTime":"2025-12-04T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.271555 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.271629 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.271652 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.271683 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.271707 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:05Z","lastTransitionTime":"2025-12-04T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.374653 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.374698 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.374711 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.374729 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.374741 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:05Z","lastTransitionTime":"2025-12-04T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.477461 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.477726 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.477849 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.477945 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.478020 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:05Z","lastTransitionTime":"2025-12-04T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.581049 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.581366 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.581450 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.581541 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.581623 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:05Z","lastTransitionTime":"2025-12-04T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.616667 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.616854 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.616923 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:05 crc kubenswrapper[4841]: E1204 09:20:05.617065 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:05 crc kubenswrapper[4841]: E1204 09:20:05.617722 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:05 crc kubenswrapper[4841]: E1204 09:20:05.617840 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.619019 4841 scope.go:117] "RemoveContainer" containerID="05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.685497 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.685575 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.685594 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.685619 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.685640 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:05Z","lastTransitionTime":"2025-12-04T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.789303 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.789345 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.789354 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.789368 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.789379 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:05Z","lastTransitionTime":"2025-12-04T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.892299 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.892348 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.892356 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.892370 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.892379 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:05Z","lastTransitionTime":"2025-12-04T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.995157 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.995200 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.995208 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.995221 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:05 crc kubenswrapper[4841]: I1204 09:20:05.995230 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:05Z","lastTransitionTime":"2025-12-04T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.098700 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.098744 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.098754 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.098792 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.098804 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:06Z","lastTransitionTime":"2025-12-04T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.201372 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.201406 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.201416 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.201429 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.201438 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:06Z","lastTransitionTime":"2025-12-04T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.303954 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.304014 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.304028 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.304051 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.304070 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:06Z","lastTransitionTime":"2025-12-04T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.407276 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.407323 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.407333 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.407348 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.407360 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:06Z","lastTransitionTime":"2025-12-04T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.509343 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.509403 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.509413 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.509430 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.509439 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:06Z","lastTransitionTime":"2025-12-04T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.612201 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.612242 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.612251 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.612264 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.612273 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:06Z","lastTransitionTime":"2025-12-04T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.616575 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:06 crc kubenswrapper[4841]: E1204 09:20:06.616685 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.760429 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.760493 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.760512 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.760537 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.760556 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:06Z","lastTransitionTime":"2025-12-04T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.863206 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.863239 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.863247 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.863259 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.863268 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:06Z","lastTransitionTime":"2025-12-04T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.965841 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.965924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.965945 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.965975 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:06 crc kubenswrapper[4841]: I1204 09:20:06.966001 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:06Z","lastTransitionTime":"2025-12-04T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.068942 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.069005 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.069024 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.069048 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.069066 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:07Z","lastTransitionTime":"2025-12-04T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.098876 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/3.log" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.099832 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/2.log" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.103732 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" exitCode=1 Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.103807 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.103862 4841 scope.go:117] "RemoveContainer" containerID="05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.105231 4841 scope.go:117] "RemoveContainer" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:20:07 crc kubenswrapper[4841]: E1204 09:20:07.105554 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.127098 4841 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5
900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.140939 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.154454 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.168158 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.172438 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.172486 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.172500 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.172521 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.172730 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:07Z","lastTransitionTime":"2025-12-04T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.182013 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.195638 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.211488 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.226377 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.238486 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.268862 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"-metrics-daemon-7t7hn openshift-ovn-kubernetes/ovnkube-node-hhkwl openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-fmcq4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1204 09:19:41.596844 6534 obj_retry.go:418] 
Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1204 09:19:41.596831 6534 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 09:19:41.596857 6534 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1204 09:19:41.596868 6534 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:20:07Z\\\",\\\"message\\\":\\\"for removal\\\\nI1204 09:20:06.849847 6903 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 09:20:06.849875 6903 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1204 09:20:06.849903 6903 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 09:20:06.849920 6903 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 
09:20:06.849928 6903 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1204 09:20:06.849931 6903 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:20:06.849885 6903 factory.go:656] Stopping watch factory\\\\nI1204 09:20:06.850026 6903 factory.go:1336] Added *v1.Node event handler 7\\\\nI1204 09:20:06.850075 6903 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1204 09:20:06.850458 6903 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1204 09:20:06.850583 6903 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1204 09:20:06.850639 6903 ovnkube.go:599] Stopped ovnkube\\\\nI1204 09:20:06.850674 6903 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1204 09:20:06.850756 6903 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.274616 4841 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.274667 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.274680 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.274699 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.274712 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:07Z","lastTransitionTime":"2025-12-04T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.281408 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc 
kubenswrapper[4841]: I1204 09:20:07.292257 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a050f163-0b91-4576-bf15-18b2900ade01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208762b3effeb4ef79a3c7ac64874044c6c99cb18b898f4be1c57262e4f7aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8906c735ba3c5099a156d618ad6b8b55919b1efc952d2a5a42a64dcea6e0b69a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d010c39b4a6d72a19c06ec6f287e42dd355ec1980ea7579676aef4e3b1ff99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.302474 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.313509 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.325234 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:58Z\\\",\\\"message\\\":\\\"2025-12-04T09:19:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4300b71e-864a-4c2b-9527-04188af34367\\\\n2025-12-04T09:19:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4300b71e-864a-4c2b-9527-04188af34367 to /host/opt/cni/bin/\\\\n2025-12-04T09:19:13Z [verbose] multus-daemon started\\\\n2025-12-04T09:19:13Z [verbose] 
Readiness Indicator file check\\\\n2025-12-04T09:19:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.337357 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621
6cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.346171 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:07Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.376794 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.376838 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.376850 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.376865 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.376878 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:07Z","lastTransitionTime":"2025-12-04T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.479261 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.479317 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.479330 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.479347 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.479358 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:07Z","lastTransitionTime":"2025-12-04T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.582648 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.582708 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.582722 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.582738 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.582750 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:07Z","lastTransitionTime":"2025-12-04T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.616860 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:07 crc kubenswrapper[4841]: E1204 09:20:07.617036 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.617330 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:07 crc kubenswrapper[4841]: E1204 09:20:07.617470 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.617840 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:07 crc kubenswrapper[4841]: E1204 09:20:07.617956 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.685880 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.685924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.685934 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.685949 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.685960 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:07Z","lastTransitionTime":"2025-12-04T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.789026 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.789086 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.789099 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.789126 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.789141 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:07Z","lastTransitionTime":"2025-12-04T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.891746 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.891819 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.891835 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.891861 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.891873 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:07Z","lastTransitionTime":"2025-12-04T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.994743 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.994832 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.994849 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.994872 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:07 crc kubenswrapper[4841]: I1204 09:20:07.994890 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:07Z","lastTransitionTime":"2025-12-04T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.097091 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.097138 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.097155 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.097179 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.097195 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:08Z","lastTransitionTime":"2025-12-04T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.108954 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/3.log" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.200447 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.200504 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.200523 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.200546 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.200564 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:08Z","lastTransitionTime":"2025-12-04T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.304065 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.304101 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.304112 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.304128 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.304140 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:08Z","lastTransitionTime":"2025-12-04T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.407247 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.407314 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.407331 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.407356 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.407373 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:08Z","lastTransitionTime":"2025-12-04T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.510867 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.510913 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.510931 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.510952 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.510968 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:08Z","lastTransitionTime":"2025-12-04T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.614227 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.614291 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.614312 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.614340 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.614361 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:08Z","lastTransitionTime":"2025-12-04T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.616530 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:08 crc kubenswrapper[4841]: E1204 09:20:08.616709 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.717478 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.717550 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.717565 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.717584 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.717603 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:08Z","lastTransitionTime":"2025-12-04T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.821349 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.821425 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.821451 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.821480 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.821501 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:08Z","lastTransitionTime":"2025-12-04T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.923623 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.923673 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.923689 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.923708 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:08 crc kubenswrapper[4841]: I1204 09:20:08.923725 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:08Z","lastTransitionTime":"2025-12-04T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.026263 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.026312 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.026329 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.026349 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.026365 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:09Z","lastTransitionTime":"2025-12-04T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.128859 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.128924 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.128942 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.128969 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.128988 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:09Z","lastTransitionTime":"2025-12-04T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.231499 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.231561 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.231578 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.231602 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.231618 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:09Z","lastTransitionTime":"2025-12-04T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.334691 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.334803 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.334828 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.334856 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.334878 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:09Z","lastTransitionTime":"2025-12-04T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.437653 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.437734 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.437756 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.437817 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.437841 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:09Z","lastTransitionTime":"2025-12-04T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.541074 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.541201 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.541221 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.541243 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.541261 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:09Z","lastTransitionTime":"2025-12-04T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.616496 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:09 crc kubenswrapper[4841]: E1204 09:20:09.616688 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.616516 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.616796 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:09 crc kubenswrapper[4841]: E1204 09:20:09.616942 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:09 crc kubenswrapper[4841]: E1204 09:20:09.617042 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.644036 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.644094 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.644111 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.644136 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.644156 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:09Z","lastTransitionTime":"2025-12-04T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.746710 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.746802 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.746825 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.746848 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.746863 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:09Z","lastTransitionTime":"2025-12-04T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.850466 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.850529 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.850542 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.850563 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.850578 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:09Z","lastTransitionTime":"2025-12-04T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.953841 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.953919 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.953937 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.953959 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:09 crc kubenswrapper[4841]: I1204 09:20:09.953977 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:09Z","lastTransitionTime":"2025-12-04T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.056564 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.056608 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.056619 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.056636 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.056647 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:10Z","lastTransitionTime":"2025-12-04T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.159390 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.159457 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.159480 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.159510 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.159533 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:10Z","lastTransitionTime":"2025-12-04T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.262426 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.262470 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.262485 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.262503 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.262516 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:10Z","lastTransitionTime":"2025-12-04T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.365032 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.365114 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.365140 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.365167 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.365189 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:10Z","lastTransitionTime":"2025-12-04T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.468369 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.468426 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.468443 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.468468 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.468487 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:10Z","lastTransitionTime":"2025-12-04T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.571127 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.571245 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.571267 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.571294 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.571314 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:10Z","lastTransitionTime":"2025-12-04T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.616223 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:10 crc kubenswrapper[4841]: E1204 09:20:10.616349 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.674834 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.674904 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.674930 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.674960 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.674984 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:10Z","lastTransitionTime":"2025-12-04T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.777681 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.777712 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.777721 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.777735 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.777745 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:10Z","lastTransitionTime":"2025-12-04T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.880584 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.880630 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.880647 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.880669 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.880688 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:10Z","lastTransitionTime":"2025-12-04T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.983869 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.983927 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.983947 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.983975 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:10 crc kubenswrapper[4841]: I1204 09:20:10.983996 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:10Z","lastTransitionTime":"2025-12-04T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.087334 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.087401 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.087481 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.087515 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.087535 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:11Z","lastTransitionTime":"2025-12-04T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.190472 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.190549 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.190567 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.190590 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.190607 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:11Z","lastTransitionTime":"2025-12-04T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.292713 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.292779 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.292792 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.292811 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.292823 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:11Z","lastTransitionTime":"2025-12-04T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.395127 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.395163 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.395174 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.395190 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.395202 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:11Z","lastTransitionTime":"2025-12-04T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.499528 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.499591 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.499610 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.499634 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.499866 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:11Z","lastTransitionTime":"2025-12-04T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.603110 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.603154 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.603164 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.603180 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.603191 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:11Z","lastTransitionTime":"2025-12-04T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.616040 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.616191 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.616297 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:11 crc kubenswrapper[4841]: E1204 09:20:11.616857 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:11 crc kubenswrapper[4841]: E1204 09:20:11.616906 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:11 crc kubenswrapper[4841]: E1204 09:20:11.617049 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.705790 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.705843 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.705860 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.705883 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.705900 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:11Z","lastTransitionTime":"2025-12-04T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.809673 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.809738 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.809754 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.809801 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.809818 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:11Z","lastTransitionTime":"2025-12-04T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.912860 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.912991 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.913012 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.913035 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:11 crc kubenswrapper[4841]: I1204 09:20:11.913052 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:11Z","lastTransitionTime":"2025-12-04T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.015860 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.015941 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.015963 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.015987 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.016006 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.118247 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.118300 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.118319 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.118343 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.118361 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.221393 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.221456 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.221472 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.221495 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.221514 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.324740 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.324847 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.324871 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.324901 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.324925 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.402282 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.402359 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.402381 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.402414 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.402436 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: E1204 09:20:12.425138 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.429552 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.429628 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.429647 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.429671 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.429692 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: E1204 09:20:12.447294 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.451373 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.451430 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.451448 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.451470 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.451486 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: E1204 09:20:12.469271 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.473853 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.473902 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.473916 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.473933 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.473945 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: E1204 09:20:12.494633 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.499166 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.499204 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.499220 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.499242 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.499257 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: E1204 09:20:12.513643 4841 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0ccf9c40-084c-4a44-9660-424570094b73\\\",\\\"systemUUID\\\":\\\"a73eb056-92c5-4c06-b0de-ae9beb3011d0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:12Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:12 crc kubenswrapper[4841]: E1204 09:20:12.513811 4841 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.515641 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.515695 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.515715 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.515737 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.515756 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.616920 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:12 crc kubenswrapper[4841]: E1204 09:20:12.617144 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.619099 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.619152 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.619174 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.619203 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.619223 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.722401 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.722524 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.722543 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.722568 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.722586 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.825709 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.825786 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.825803 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.825823 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.825838 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.928340 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.928433 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.928458 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.928489 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:12 crc kubenswrapper[4841]: I1204 09:20:12.928514 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:12Z","lastTransitionTime":"2025-12-04T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.031222 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.031292 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.031309 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.031335 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.031353 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:13Z","lastTransitionTime":"2025-12-04T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.135166 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.135239 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.135263 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.135293 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.135374 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:13Z","lastTransitionTime":"2025-12-04T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.238855 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.238922 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.238944 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.238971 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.238992 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:13Z","lastTransitionTime":"2025-12-04T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.341387 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.341445 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.341464 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.341487 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.341504 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:13Z","lastTransitionTime":"2025-12-04T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.444020 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.444092 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.444117 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.444147 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.444165 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:13Z","lastTransitionTime":"2025-12-04T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.546101 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.546176 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.546195 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.546219 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.546238 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:13Z","lastTransitionTime":"2025-12-04T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.616313 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.616409 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:13 crc kubenswrapper[4841]: E1204 09:20:13.616651 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.616685 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:13 crc kubenswrapper[4841]: E1204 09:20:13.616861 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:13 crc kubenswrapper[4841]: E1204 09:20:13.616941 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.636101 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afad1dd-cafc-4c83-9e90-b02c61d10486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4230e5bb2e22e525079e33292e70b06e886b110ab5843bce6cc1dad8eb880549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metri
cs-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d73a19b96bfb19305555a0df63e160ecb5f40d02a7303f820dfd5b532250a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmhsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-56vgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.649249 4841 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.649317 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.649329 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.649347 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.649361 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:13Z","lastTransitionTime":"2025-12-04T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.655098 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e964530e-43bb-4d5d-b646-80e66ac07b78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebf2f062c4151c24a44f090dcf8b3bb19d2557cc894377bdffde4d9a5f093cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ec61f50e
db34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41bd2a93028dac1bcb70ae04b6faaab4a9cdf8a8785cdc8399ecd5487117935d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbd5e89e8f1e3abbea534e65d21cc1adef16eea271bdc5f520fc30fee15c821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.670057 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.685169 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d5tkl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e52051e-dda2-46c1-8026-af8c26dff263\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1734aa5243134c0054778dc881982cfbaedd593cd4a9caced79b5d92e7b95bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47ll6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d5tkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.703753 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdd240e-976c-408f-9ace-3cd860da98e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f996c9ae43c8815ce599461fe5b44ec16632d6e66aa5a00e392b5e024deaeb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rxw4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.740671 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c56a9daa-a941-4d89-abd0-b7f0472ee869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05d15a34282004b99406eb70dc3aea08c435905791aa53053243bed0d1c03b28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"message\\\":\\\"-metrics-daemon-7t7hn openshift-ovn-kubernetes/ovnkube-node-hhkwl openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-fmcq4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g]\\\\nI1204 09:19:41.596844 6534 obj_retry.go:418] 
Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1204 09:19:41.596831 6534 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 09:19:41.596857 6534 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1204 09:19:41.596868 6534 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:20:07Z\\\",\\\"message\\\":\\\"for removal\\\\nI1204 09:20:06.849847 6903 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 09:20:06.849875 6903 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1204 09:20:06.849903 6903 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 09:20:06.849920 6903 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 
09:20:06.849928 6903 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1204 09:20:06.849931 6903 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 09:20:06.849885 6903 factory.go:656] Stopping watch factory\\\\nI1204 09:20:06.850026 6903 factory.go:1336] Added *v1.Node event handler 7\\\\nI1204 09:20:06.850075 6903 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1204 09:20:06.850458 6903 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1204 09:20:06.850583 6903 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1204 09:20:06.850639 6903 ovnkube.go:599] Stopped ovnkube\\\\nI1204 09:20:06.850674 6903 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1204 09:20:06.850756 6903 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78nlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hhkwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.752120 4841 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.752161 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.752172 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.752189 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.752200 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:13Z","lastTransitionTime":"2025-12-04T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.756585 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e74f87eb-fb70-4679-93f8-ebe5de564484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crb2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7t7hn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc 
kubenswrapper[4841]: I1204 09:20:13.772427 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a050f163-0b91-4576-bf15-18b2900ade01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208762b3effeb4ef79a3c7ac64874044c6c99cb18b898f4be1c57262e4f7aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8906c735ba3c5099a156d618ad6b8b55919b1efc952d2a5a42a64dcea6e0b69a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d010c39b4a6d72a19c06ec6f287e42dd355ec1980ea7579676aef4e3b1ff99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dfc5b103d034aacb573d1d147c210b432f4f952d7ebe57b18e69b815afd4915\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.790030 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ddb33f501a5edd01f5219631994bc47643ee6c720f0c88034dc00a083058df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.804379 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.817164 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893dd14c716ed1b92f45801dd061bcfdbb1dadb02dd82594fef6b732fd9aaf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac83574b5217e1eeec67014035bf82bd55a964dd3dfc37bcfd733b163c8599c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.830615 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb1a2623-885c-4232-bdda-ce68122022f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6216cd342e9fd3939fad2af5ed0541b5fba1cd8639304ce19e3529c88d861747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://060d665062ea95ea4d722a3e130e59b62bd383dd68d6d06baf6ee7c9b0991c65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e05e039886704f7b8f9bbb9dd7f29b35b94cfd057515cf3966f9ca44f99d0076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30d82e3bbd398cb081865440d018c0560ab9e44be1368abbb7de347cd71fb98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226af
c6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226afc6bb402a2c4a6b796aab145fe135f4cf8b6372d9d5f4b01d0cdc65eaf88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7a361af9ba8aad679a14653dd116103165e7008f9d588b0f0c673bfc904e64e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9504131017e7275a3392bafc798c3dc061835a8b75bbbc48f5924bdc473a69c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbhzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lx6q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.840100 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fmcq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e112e2-9aab-40e0-bca5-ced078a00cc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303519cc496fe8b323747514fdbcd7f6333c2b3aef0bb798943661f1f96296a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-04T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfvfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fmcq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.850750 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6d5f36abb5a925c4e5856e3833bcbb6da58a203edf85a12854f8346428de0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.854427 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.854543 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.854601 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.854673 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.854735 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:13Z","lastTransitionTime":"2025-12-04T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.861948 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.873472 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-76xdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86bfe6c3-d06e-40b1-9801-74abeb07ae15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T09:19:58Z\\\",\\\"message\\\":\\\"2025-12-04T09:19:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4300b71e-864a-4c2b-9527-04188af34367\\\\n2025-12-04T09:19:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4300b71e-864a-4c2b-9527-04188af34367 to /host/opt/cni/bin/\\\\n2025-12-04T09:19:13Z [verbose] multus-daemon started\\\\n2025-12-04T09:19:13Z [verbose] 
Readiness Indicator file check\\\\n2025-12-04T09:19:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:19:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbqn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:19:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-76xdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.889239 4841 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccd5d301-250e-4a2f-96c6-58cb258cb360\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T09:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T09:19:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW1204 09:19:11.177255 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1204 09:19:11.177447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 09:19:11.179779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1681952438/tls.crt::/tmp/serving-cert-1681952438/tls.key\\\\\\\"\\\\nI1204 09:19:11.471442 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 09:19:11.473466 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 09:19:11.473483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 09:19:11.473504 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1204 09:19:11.473509 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 09:19:11.478507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1204 09:19:11.478527 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 09:19:11.478530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 09:19:11.478554 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 09:19:11.478557 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 09:19:11.478559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 09:19:11.478562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 09:19:11.479541 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T09:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T09:18:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T09:18:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-04T09:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T09:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T09:20:13Z is after 2025-08-24T17:21:41Z" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.958572 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.958631 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.958649 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.958674 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:13 crc kubenswrapper[4841]: I1204 09:20:13.958692 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:13Z","lastTransitionTime":"2025-12-04T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.061387 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.061427 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.061439 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.061454 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.061465 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:14Z","lastTransitionTime":"2025-12-04T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.164015 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.164080 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.164096 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.164120 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.164137 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:14Z","lastTransitionTime":"2025-12-04T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.266186 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.266231 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.266242 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.266256 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.266267 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:14Z","lastTransitionTime":"2025-12-04T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.368898 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.368946 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.368957 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.368971 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.368982 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:14Z","lastTransitionTime":"2025-12-04T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.472024 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.472084 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.472106 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.472137 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.472159 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:14Z","lastTransitionTime":"2025-12-04T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.575541 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.575607 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.575625 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.575651 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.575669 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:14Z","lastTransitionTime":"2025-12-04T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.616185 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:14 crc kubenswrapper[4841]: E1204 09:20:14.616488 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.630691 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.678509 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.678553 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.678569 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.678590 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.678606 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:14Z","lastTransitionTime":"2025-12-04T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.781224 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.781291 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.781310 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.781333 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.781351 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:14Z","lastTransitionTime":"2025-12-04T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.883854 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.883899 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.883910 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.883926 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.883938 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:14Z","lastTransitionTime":"2025-12-04T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.987083 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.987127 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.987135 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.987151 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:14 crc kubenswrapper[4841]: I1204 09:20:14.987160 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:14Z","lastTransitionTime":"2025-12-04T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.090427 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.090501 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.090538 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.090567 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.090588 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:15Z","lastTransitionTime":"2025-12-04T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.194473 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.194532 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.194549 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.194571 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.194590 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:15Z","lastTransitionTime":"2025-12-04T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.298521 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.299036 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.299050 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.299067 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.299081 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:15Z","lastTransitionTime":"2025-12-04T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.402099 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.402144 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.402160 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.402182 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.402200 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:15Z","lastTransitionTime":"2025-12-04T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.454697 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.454954 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-04 09:21:19.454922671 +0000 UTC m=+146.206712945 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.504134 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.504165 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.504188 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.504204 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.504214 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:15Z","lastTransitionTime":"2025-12-04T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.556320 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.556381 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.556408 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.556433 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.556573 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.556590 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.556603 4841 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.556651 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.55663552 +0000 UTC m=+146.308425724 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.556870 4841 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.556903 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-04 09:21:19.556893907 +0000 UTC m=+146.308684111 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.557017 4841 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.557045 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.557036771 +0000 UTC m=+146.308826975 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.557298 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.557350 4841 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.557379 4841 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.557479 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.557444231 +0000 UTC m=+146.309234475 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.606326 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.606362 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.606370 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.606385 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.606397 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:15Z","lastTransitionTime":"2025-12-04T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.616384 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.616410 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.616569 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.616414 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.616616 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:15 crc kubenswrapper[4841]: E1204 09:20:15.616720 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.709401 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.709472 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.709489 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.709511 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.709529 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:15Z","lastTransitionTime":"2025-12-04T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.812973 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.813030 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.813047 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.813071 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.813087 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:15Z","lastTransitionTime":"2025-12-04T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.916239 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.916296 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.916315 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.916341 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:15 crc kubenswrapper[4841]: I1204 09:20:15.916364 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:15Z","lastTransitionTime":"2025-12-04T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.018965 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.019036 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.019062 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.019090 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.019109 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:16Z","lastTransitionTime":"2025-12-04T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.121179 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.121231 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.121249 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.121272 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.121289 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:16Z","lastTransitionTime":"2025-12-04T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.224620 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.224749 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.224818 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.224851 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.224876 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:16Z","lastTransitionTime":"2025-12-04T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.328032 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.328105 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.328131 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.328163 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.328185 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:16Z","lastTransitionTime":"2025-12-04T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.431421 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.431506 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.431529 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.431555 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.431574 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:16Z","lastTransitionTime":"2025-12-04T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.538587 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.538643 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.538662 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.538685 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.538702 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:16Z","lastTransitionTime":"2025-12-04T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.616456 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:16 crc kubenswrapper[4841]: E1204 09:20:16.616664 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.640985 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.641062 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.641072 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.641083 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.641114 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:16Z","lastTransitionTime":"2025-12-04T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.744505 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.744550 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.744564 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.744580 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.744593 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:16Z","lastTransitionTime":"2025-12-04T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.846962 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.847014 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.847032 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.847057 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.847075 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:16Z","lastTransitionTime":"2025-12-04T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.949721 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.949792 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.949806 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.949824 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:16 crc kubenswrapper[4841]: I1204 09:20:16.949835 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:16Z","lastTransitionTime":"2025-12-04T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.052245 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.052305 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.052323 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.052347 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.052369 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:17Z","lastTransitionTime":"2025-12-04T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.155499 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.155575 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.155592 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.155623 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.155640 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:17Z","lastTransitionTime":"2025-12-04T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.258271 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.258328 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.258345 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.258369 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.258386 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:17Z","lastTransitionTime":"2025-12-04T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.360875 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.360943 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.360961 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.360991 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.361012 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:17Z","lastTransitionTime":"2025-12-04T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.463953 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.464033 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.464056 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.464086 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.464113 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:17Z","lastTransitionTime":"2025-12-04T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.566932 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.566997 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.567016 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.567041 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.567058 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:17Z","lastTransitionTime":"2025-12-04T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.616682 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.616844 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:17 crc kubenswrapper[4841]: E1204 09:20:17.616898 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.616978 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:17 crc kubenswrapper[4841]: E1204 09:20:17.617259 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:17 crc kubenswrapper[4841]: E1204 09:20:17.617342 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.635994 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.669609 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.669657 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.669666 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.669680 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.669689 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:17Z","lastTransitionTime":"2025-12-04T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.773120 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.773175 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.773194 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.773217 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.773234 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:17Z","lastTransitionTime":"2025-12-04T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.876228 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.876329 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.876350 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.876375 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.876424 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:17Z","lastTransitionTime":"2025-12-04T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.979570 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.979612 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.979623 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.979639 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:17 crc kubenswrapper[4841]: I1204 09:20:17.979651 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:17Z","lastTransitionTime":"2025-12-04T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.082401 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.082445 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.082458 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.082479 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.082495 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:18Z","lastTransitionTime":"2025-12-04T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.185507 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.185579 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.185601 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.185667 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.185691 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:18Z","lastTransitionTime":"2025-12-04T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.288453 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.288508 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.288524 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.288547 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.288565 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:18Z","lastTransitionTime":"2025-12-04T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.391088 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.391160 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.391187 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.391218 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.391240 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:18Z","lastTransitionTime":"2025-12-04T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.494177 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.494236 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.494253 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.494278 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.494294 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:18Z","lastTransitionTime":"2025-12-04T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.598053 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.598255 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.598277 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.598302 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.598425 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:18Z","lastTransitionTime":"2025-12-04T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.616525 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 04 09:20:18 crc kubenswrapper[4841]: E1204 09:20:18.616887 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.701089 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.701151 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.701168 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.701191 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.701208 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:18Z","lastTransitionTime":"2025-12-04T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.804380 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.804433 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.804446 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.804463 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.804475 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:18Z","lastTransitionTime":"2025-12-04T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.907593 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.907651 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.907669 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.907694 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:18 crc kubenswrapper[4841]: I1204 09:20:18.907718 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:18Z","lastTransitionTime":"2025-12-04T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.010153 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.010210 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.010232 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.010259 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.010279 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:19Z","lastTransitionTime":"2025-12-04T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.112980 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.113055 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.113078 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.113105 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.113127 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:19Z","lastTransitionTime":"2025-12-04T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.215963 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.216048 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.216073 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.216105 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.216129 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:19Z","lastTransitionTime":"2025-12-04T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.319748 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.319843 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.319866 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.319895 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.319914 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:19Z","lastTransitionTime":"2025-12-04T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.421990 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.422055 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.422077 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.422104 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.422126 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:19Z","lastTransitionTime":"2025-12-04T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.525307 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.525373 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.525403 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.525430 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.525451 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:19Z","lastTransitionTime":"2025-12-04T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.616237 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.616523 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.616547 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 04 09:20:19 crc kubenswrapper[4841]: E1204 09:20:19.616757 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484"
Dec 04 09:20:19 crc kubenswrapper[4841]: E1204 09:20:19.616929 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 04 09:20:19 crc kubenswrapper[4841]: E1204 09:20:19.617042 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.628143 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.628200 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.628222 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.628248 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.628269 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:19Z","lastTransitionTime":"2025-12-04T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.731332 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.731397 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.731420 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.731448 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.731469 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:19Z","lastTransitionTime":"2025-12-04T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.834340 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.834409 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.834430 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.834454 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.834475 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:19Z","lastTransitionTime":"2025-12-04T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.937089 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.937135 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.937147 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.937161 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:19 crc kubenswrapper[4841]: I1204 09:20:19.937172 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:19Z","lastTransitionTime":"2025-12-04T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.040731 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.040914 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.040932 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.040954 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.040970 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:20Z","lastTransitionTime":"2025-12-04T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.144202 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.144256 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.144274 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.144300 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.144319 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:20Z","lastTransitionTime":"2025-12-04T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.246826 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.246886 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.246903 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.246930 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.246948 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:20Z","lastTransitionTime":"2025-12-04T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.349787 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.349852 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.349869 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.349892 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.349908 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:20Z","lastTransitionTime":"2025-12-04T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.453079 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.453256 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.453287 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.453317 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.453338 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:20Z","lastTransitionTime":"2025-12-04T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.559082 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.559150 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.559200 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.559222 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.559270 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:20Z","lastTransitionTime":"2025-12-04T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.616169 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 04 09:20:20 crc kubenswrapper[4841]: E1204 09:20:20.616298 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.662564 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.662601 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.662611 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.662624 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.662633 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:20Z","lastTransitionTime":"2025-12-04T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.766022 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.766100 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.766124 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.766155 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.766178 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:20Z","lastTransitionTime":"2025-12-04T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.869912 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.870021 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.870039 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.870061 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.870080 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:20Z","lastTransitionTime":"2025-12-04T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.972288 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.972315 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.972324 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.972337 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:20 crc kubenswrapper[4841]: I1204 09:20:20.972349 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:20Z","lastTransitionTime":"2025-12-04T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.075733 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.075912 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.075932 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.075957 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.075976 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:21Z","lastTransitionTime":"2025-12-04T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.178975 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.179049 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.179066 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.179094 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.179114 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:21Z","lastTransitionTime":"2025-12-04T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.281703 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.281788 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.281800 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.281814 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.281824 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:21Z","lastTransitionTime":"2025-12-04T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.384732 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.384825 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.384840 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.384881 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.384892 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:21Z","lastTransitionTime":"2025-12-04T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.488286 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.488350 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.488371 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.488394 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.488411 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:21Z","lastTransitionTime":"2025-12-04T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.591916 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.591986 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.592009 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.592034 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.592051 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:21Z","lastTransitionTime":"2025-12-04T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.616613 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.616747 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.616747 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:21 crc kubenswrapper[4841]: E1204 09:20:21.616945 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:21 crc kubenswrapper[4841]: E1204 09:20:21.617294 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:21 crc kubenswrapper[4841]: E1204 09:20:21.619025 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.619715 4841 scope.go:117] "RemoveContainer" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:20:21 crc kubenswrapper[4841]: E1204 09:20:21.619999 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.695219 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.695567 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.695706 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.695880 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.696019 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:21Z","lastTransitionTime":"2025-12-04T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.697660 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.697642409 podStartE2EDuration="40.697642409s" podCreationTimestamp="2025-12-04 09:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:21.65270523 +0000 UTC m=+88.404495484" watchObservedRunningTime="2025-12-04 09:20:21.697642409 +0000 UTC m=+88.449432653" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.798794 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.798859 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.798873 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.798894 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.798906 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:21Z","lastTransitionTime":"2025-12-04T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.814392 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.814375141 podStartE2EDuration="4.814375141s" podCreationTimestamp="2025-12-04 09:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:21.812633747 +0000 UTC m=+88.564423991" watchObservedRunningTime="2025-12-04 09:20:21.814375141 +0000 UTC m=+88.566165355" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.891610 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2lx6q" podStartSLOduration=69.891546436 podStartE2EDuration="1m9.891546436s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:21.891119974 +0000 UTC m=+88.642910238" watchObservedRunningTime="2025-12-04 09:20:21.891546436 +0000 UTC m=+88.643336680" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.893121 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-76xdk" podStartSLOduration=69.893104824 podStartE2EDuration="1m9.893104824s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:21.867889512 +0000 UTC m=+88.619679736" watchObservedRunningTime="2025-12-04 09:20:21.893104824 +0000 UTC m=+88.644895088" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.901406 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.901441 4841 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.901456 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.901471 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.901482 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:21Z","lastTransitionTime":"2025-12-04T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.906301 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-fmcq4" podStartSLOduration=69.906279369 podStartE2EDuration="1m9.906279369s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:21.9046939 +0000 UTC m=+88.656484154" watchObservedRunningTime="2025-12-04 09:20:21.906279369 +0000 UTC m=+88.658069603" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.923789 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.92374593 podStartE2EDuration="7.92374593s" podCreationTimestamp="2025-12-04 09:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:21.923305839 +0000 UTC 
m=+88.675096073" watchObservedRunningTime="2025-12-04 09:20:21.92374593 +0000 UTC m=+88.675536144" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.965475 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.965409248 podStartE2EDuration="1m10.965409248s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:21.945492527 +0000 UTC m=+88.697282741" watchObservedRunningTime="2025-12-04 09:20:21.965409248 +0000 UTC m=+88.717199492" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.966873 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.966858565 podStartE2EDuration="1m9.966858565s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:21.964509986 +0000 UTC m=+88.716300230" watchObservedRunningTime="2025-12-04 09:20:21.966858565 +0000 UTC m=+88.718648799" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.987177 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-d5tkl" podStartSLOduration=70.98715588499999 podStartE2EDuration="1m10.987155885s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:21.98652279 +0000 UTC m=+88.738313004" watchObservedRunningTime="2025-12-04 09:20:21.987155885 +0000 UTC m=+88.738946089" Dec 04 09:20:21 crc kubenswrapper[4841]: I1204 09:20:21.998300 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podStartSLOduration=69.998243399 podStartE2EDuration="1m9.998243399s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:21.997729687 +0000 UTC m=+88.749519901" watchObservedRunningTime="2025-12-04 09:20:21.998243399 +0000 UTC m=+88.750033643" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.004350 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.004391 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.004402 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.004419 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.004431 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:22Z","lastTransitionTime":"2025-12-04T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.009286 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-56vgt" podStartSLOduration=70.009273161 podStartE2EDuration="1m10.009273161s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:22.008798319 +0000 UTC m=+88.760588543" watchObservedRunningTime="2025-12-04 09:20:22.009273161 +0000 UTC m=+88.761063365" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.108133 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.108499 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.108705 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.108945 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.109297 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:22Z","lastTransitionTime":"2025-12-04T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.211987 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.212065 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.212089 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.212118 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.212142 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:22Z","lastTransitionTime":"2025-12-04T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.314605 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.314647 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.314663 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.314688 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.314706 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:22Z","lastTransitionTime":"2025-12-04T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.416935 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.416992 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.417010 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.417077 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.417096 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:22Z","lastTransitionTime":"2025-12-04T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.519842 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.519937 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.519956 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.519981 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.519998 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:22Z","lastTransitionTime":"2025-12-04T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.588815 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.588916 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.589353 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.589445 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.589732 4841 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T09:20:22Z","lastTransitionTime":"2025-12-04T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.616298 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:22 crc kubenswrapper[4841]: E1204 09:20:22.616536 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.653321 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b"] Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.653850 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.656605 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.658269 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.658410 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.658549 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.731482 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/87c6462d-eab8-4c26-bc10-0d2c09a0f961-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.731871 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/87c6462d-eab8-4c26-bc10-0d2c09a0f961-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.732013 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c6462d-eab8-4c26-bc10-0d2c09a0f961-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.732092 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87c6462d-eab8-4c26-bc10-0d2c09a0f961-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.732164 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87c6462d-eab8-4c26-bc10-0d2c09a0f961-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.832849 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/87c6462d-eab8-4c26-bc10-0d2c09a0f961-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.832990 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/87c6462d-eab8-4c26-bc10-0d2c09a0f961-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.833038 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87c6462d-eab8-4c26-bc10-0d2c09a0f961-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.834476 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87c6462d-eab8-4c26-bc10-0d2c09a0f961-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.834597 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87c6462d-eab8-4c26-bc10-0d2c09a0f961-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.835105 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/87c6462d-eab8-4c26-bc10-0d2c09a0f961-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.835266 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c6462d-eab8-4c26-bc10-0d2c09a0f961-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.835187 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/87c6462d-eab8-4c26-bc10-0d2c09a0f961-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.841367 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c6462d-eab8-4c26-bc10-0d2c09a0f961-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 09:20:22.857150 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87c6462d-eab8-4c26-bc10-0d2c09a0f961-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n2g8b\" (UID: \"87c6462d-eab8-4c26-bc10-0d2c09a0f961\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:22 crc kubenswrapper[4841]: I1204 
09:20:22.974303 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" Dec 04 09:20:23 crc kubenswrapper[4841]: I1204 09:20:23.172212 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" event={"ID":"87c6462d-eab8-4c26-bc10-0d2c09a0f961","Type":"ContainerStarted","Data":"b2d8c5c880b4cfeb7213e2cea030257e36df20737850d8931c955856abc91fe2"} Dec 04 09:20:23 crc kubenswrapper[4841]: I1204 09:20:23.172655 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" event={"ID":"87c6462d-eab8-4c26-bc10-0d2c09a0f961","Type":"ContainerStarted","Data":"1c3b583eb271b538898c88d3c9fec192c890dc367eade5869f564678b7c2827b"} Dec 04 09:20:23 crc kubenswrapper[4841]: I1204 09:20:23.616121 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:23 crc kubenswrapper[4841]: I1204 09:20:23.616224 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:23 crc kubenswrapper[4841]: E1204 09:20:23.616283 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:23 crc kubenswrapper[4841]: E1204 09:20:23.616443 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:23 crc kubenswrapper[4841]: I1204 09:20:23.616866 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:23 crc kubenswrapper[4841]: E1204 09:20:23.617799 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:24 crc kubenswrapper[4841]: I1204 09:20:24.616164 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:24 crc kubenswrapper[4841]: E1204 09:20:24.616351 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:25 crc kubenswrapper[4841]: I1204 09:20:25.616453 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:25 crc kubenswrapper[4841]: I1204 09:20:25.616592 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:25 crc kubenswrapper[4841]: I1204 09:20:25.616625 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:25 crc kubenswrapper[4841]: E1204 09:20:25.623282 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:25 crc kubenswrapper[4841]: E1204 09:20:25.623806 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:25 crc kubenswrapper[4841]: E1204 09:20:25.624207 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:26 crc kubenswrapper[4841]: I1204 09:20:26.615882 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:26 crc kubenswrapper[4841]: E1204 09:20:26.616026 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:27 crc kubenswrapper[4841]: I1204 09:20:27.616533 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:27 crc kubenswrapper[4841]: I1204 09:20:27.616584 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:27 crc kubenswrapper[4841]: E1204 09:20:27.616713 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:27 crc kubenswrapper[4841]: I1204 09:20:27.616731 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:27 crc kubenswrapper[4841]: E1204 09:20:27.616894 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:27 crc kubenswrapper[4841]: E1204 09:20:27.617037 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:28 crc kubenswrapper[4841]: I1204 09:20:28.616508 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:28 crc kubenswrapper[4841]: E1204 09:20:28.616724 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:29 crc kubenswrapper[4841]: I1204 09:20:29.529140 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:20:29 crc kubenswrapper[4841]: I1204 09:20:29.530421 4841 scope.go:117] "RemoveContainer" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:20:29 crc kubenswrapper[4841]: E1204 09:20:29.530669 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" Dec 04 09:20:29 crc kubenswrapper[4841]: I1204 09:20:29.616245 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:29 crc kubenswrapper[4841]: I1204 09:20:29.616313 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:29 crc kubenswrapper[4841]: I1204 09:20:29.616266 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:29 crc kubenswrapper[4841]: E1204 09:20:29.616404 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:29 crc kubenswrapper[4841]: E1204 09:20:29.616532 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:29 crc kubenswrapper[4841]: E1204 09:20:29.616631 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:29 crc kubenswrapper[4841]: I1204 09:20:29.706270 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:29 crc kubenswrapper[4841]: E1204 09:20:29.706457 4841 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:20:29 crc kubenswrapper[4841]: E1204 09:20:29.706540 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs podName:e74f87eb-fb70-4679-93f8-ebe5de564484 nodeName:}" failed. 
No retries permitted until 2025-12-04 09:21:33.706517141 +0000 UTC m=+160.458307385 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs") pod "network-metrics-daemon-7t7hn" (UID: "e74f87eb-fb70-4679-93f8-ebe5de564484") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 09:20:30 crc kubenswrapper[4841]: I1204 09:20:30.615743 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:30 crc kubenswrapper[4841]: E1204 09:20:30.615938 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:31 crc kubenswrapper[4841]: I1204 09:20:31.616143 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:31 crc kubenswrapper[4841]: I1204 09:20:31.616306 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:31 crc kubenswrapper[4841]: E1204 09:20:31.616383 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:31 crc kubenswrapper[4841]: I1204 09:20:31.616178 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:31 crc kubenswrapper[4841]: E1204 09:20:31.616700 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:31 crc kubenswrapper[4841]: E1204 09:20:31.616873 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:32 crc kubenswrapper[4841]: I1204 09:20:32.616364 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:32 crc kubenswrapper[4841]: E1204 09:20:32.616620 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:33 crc kubenswrapper[4841]: I1204 09:20:33.616549 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:33 crc kubenswrapper[4841]: I1204 09:20:33.616640 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:33 crc kubenswrapper[4841]: E1204 09:20:33.617688 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:33 crc kubenswrapper[4841]: I1204 09:20:33.617718 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:33 crc kubenswrapper[4841]: E1204 09:20:33.617978 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:33 crc kubenswrapper[4841]: E1204 09:20:33.618051 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:34 crc kubenswrapper[4841]: I1204 09:20:34.616082 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:34 crc kubenswrapper[4841]: E1204 09:20:34.616277 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:35 crc kubenswrapper[4841]: I1204 09:20:35.615915 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:35 crc kubenswrapper[4841]: E1204 09:20:35.616168 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:35 crc kubenswrapper[4841]: I1204 09:20:35.616245 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:35 crc kubenswrapper[4841]: I1204 09:20:35.616361 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:35 crc kubenswrapper[4841]: E1204 09:20:35.616512 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:35 crc kubenswrapper[4841]: E1204 09:20:35.616890 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:36 crc kubenswrapper[4841]: I1204 09:20:36.616455 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:36 crc kubenswrapper[4841]: E1204 09:20:36.617173 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:37 crc kubenswrapper[4841]: I1204 09:20:37.616492 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:37 crc kubenswrapper[4841]: I1204 09:20:37.616620 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:37 crc kubenswrapper[4841]: E1204 09:20:37.616931 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:37 crc kubenswrapper[4841]: I1204 09:20:37.616798 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:37 crc kubenswrapper[4841]: E1204 09:20:37.617661 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:37 crc kubenswrapper[4841]: E1204 09:20:37.617059 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:38 crc kubenswrapper[4841]: I1204 09:20:38.615920 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:38 crc kubenswrapper[4841]: E1204 09:20:38.616658 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:39 crc kubenswrapper[4841]: I1204 09:20:39.615832 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:39 crc kubenswrapper[4841]: I1204 09:20:39.615901 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:39 crc kubenswrapper[4841]: E1204 09:20:39.616038 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:39 crc kubenswrapper[4841]: I1204 09:20:39.616095 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:39 crc kubenswrapper[4841]: E1204 09:20:39.616240 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:39 crc kubenswrapper[4841]: E1204 09:20:39.616503 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:40 crc kubenswrapper[4841]: I1204 09:20:40.616591 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:40 crc kubenswrapper[4841]: E1204 09:20:40.616871 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:41 crc kubenswrapper[4841]: I1204 09:20:41.616716 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:41 crc kubenswrapper[4841]: I1204 09:20:41.616835 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:41 crc kubenswrapper[4841]: E1204 09:20:41.616840 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:41 crc kubenswrapper[4841]: I1204 09:20:41.616935 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:41 crc kubenswrapper[4841]: E1204 09:20:41.617054 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:41 crc kubenswrapper[4841]: E1204 09:20:41.617151 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:42 crc kubenswrapper[4841]: I1204 09:20:42.616798 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:42 crc kubenswrapper[4841]: E1204 09:20:42.617151 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:43 crc kubenswrapper[4841]: I1204 09:20:43.762517 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:43 crc kubenswrapper[4841]: I1204 09:20:43.762585 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:43 crc kubenswrapper[4841]: I1204 09:20:43.762682 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:43 crc kubenswrapper[4841]: E1204 09:20:43.763414 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:43 crc kubenswrapper[4841]: E1204 09:20:43.763540 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:43 crc kubenswrapper[4841]: E1204 09:20:43.763660 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:43 crc kubenswrapper[4841]: I1204 09:20:43.764397 4841 scope.go:117] "RemoveContainer" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:20:43 crc kubenswrapper[4841]: E1204 09:20:43.764609 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hhkwl_openshift-ovn-kubernetes(c56a9daa-a941-4d89-abd0-b7f0472ee869)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" Dec 04 09:20:44 crc kubenswrapper[4841]: I1204 09:20:44.616213 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:44 crc kubenswrapper[4841]: E1204 09:20:44.616387 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:44 crc kubenswrapper[4841]: I1204 09:20:44.771010 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-76xdk_86bfe6c3-d06e-40b1-9801-74abeb07ae15/kube-multus/1.log" Dec 04 09:20:44 crc kubenswrapper[4841]: I1204 09:20:44.771547 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-76xdk_86bfe6c3-d06e-40b1-9801-74abeb07ae15/kube-multus/0.log" Dec 04 09:20:44 crc kubenswrapper[4841]: I1204 09:20:44.771584 4841 generic.go:334] "Generic (PLEG): container finished" podID="86bfe6c3-d06e-40b1-9801-74abeb07ae15" containerID="d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66" exitCode=1 Dec 04 09:20:44 crc kubenswrapper[4841]: I1204 09:20:44.771608 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-76xdk" event={"ID":"86bfe6c3-d06e-40b1-9801-74abeb07ae15","Type":"ContainerDied","Data":"d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66"} Dec 04 09:20:44 crc kubenswrapper[4841]: I1204 09:20:44.771636 4841 scope.go:117] "RemoveContainer" containerID="6d1a891dadfaf779de1a7bfd82d7b2f890c1a1753bf3121e299c5777b3faf6c0" Dec 04 09:20:44 crc kubenswrapper[4841]: I1204 09:20:44.772167 4841 scope.go:117] "RemoveContainer" containerID="d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66" Dec 04 09:20:44 crc kubenswrapper[4841]: E1204 09:20:44.772389 4841 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-76xdk_openshift-multus(86bfe6c3-d06e-40b1-9801-74abeb07ae15)\"" pod="openshift-multus/multus-76xdk" podUID="86bfe6c3-d06e-40b1-9801-74abeb07ae15" Dec 04 09:20:44 crc kubenswrapper[4841]: I1204 09:20:44.800119 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n2g8b" podStartSLOduration=93.800104844 podStartE2EDuration="1m33.800104844s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:23.194724811 +0000 UTC m=+89.946515035" watchObservedRunningTime="2025-12-04 09:20:44.800104844 +0000 UTC m=+111.551895048" Dec 04 09:20:45 crc kubenswrapper[4841]: I1204 09:20:45.616248 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:45 crc kubenswrapper[4841]: I1204 09:20:45.616336 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:45 crc kubenswrapper[4841]: I1204 09:20:45.616254 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:45 crc kubenswrapper[4841]: E1204 09:20:45.616421 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:45 crc kubenswrapper[4841]: E1204 09:20:45.616545 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:45 crc kubenswrapper[4841]: E1204 09:20:45.616646 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:45 crc kubenswrapper[4841]: I1204 09:20:45.776415 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-76xdk_86bfe6c3-d06e-40b1-9801-74abeb07ae15/kube-multus/1.log" Dec 04 09:20:46 crc kubenswrapper[4841]: I1204 09:20:46.615954 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:46 crc kubenswrapper[4841]: E1204 09:20:46.616197 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:47 crc kubenswrapper[4841]: I1204 09:20:47.616069 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:47 crc kubenswrapper[4841]: I1204 09:20:47.616144 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:47 crc kubenswrapper[4841]: E1204 09:20:47.616303 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:47 crc kubenswrapper[4841]: I1204 09:20:47.616413 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:47 crc kubenswrapper[4841]: E1204 09:20:47.616587 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:47 crc kubenswrapper[4841]: E1204 09:20:47.616692 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:48 crc kubenswrapper[4841]: I1204 09:20:48.616631 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:48 crc kubenswrapper[4841]: E1204 09:20:48.616879 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:49 crc kubenswrapper[4841]: I1204 09:20:49.616355 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:49 crc kubenswrapper[4841]: I1204 09:20:49.616443 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:49 crc kubenswrapper[4841]: E1204 09:20:49.616539 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:49 crc kubenswrapper[4841]: I1204 09:20:49.616625 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:49 crc kubenswrapper[4841]: E1204 09:20:49.616876 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:49 crc kubenswrapper[4841]: E1204 09:20:49.617023 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:50 crc kubenswrapper[4841]: I1204 09:20:50.616660 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:50 crc kubenswrapper[4841]: E1204 09:20:50.617919 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:51 crc kubenswrapper[4841]: I1204 09:20:51.615688 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:51 crc kubenswrapper[4841]: I1204 09:20:51.615743 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:51 crc kubenswrapper[4841]: E1204 09:20:51.615877 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:51 crc kubenswrapper[4841]: I1204 09:20:51.615903 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:51 crc kubenswrapper[4841]: E1204 09:20:51.616054 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:51 crc kubenswrapper[4841]: E1204 09:20:51.616213 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:52 crc kubenswrapper[4841]: I1204 09:20:52.615959 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:52 crc kubenswrapper[4841]: E1204 09:20:52.616150 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:53 crc kubenswrapper[4841]: E1204 09:20:53.570590 4841 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 04 09:20:53 crc kubenswrapper[4841]: I1204 09:20:53.616045 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:53 crc kubenswrapper[4841]: I1204 09:20:53.616125 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:53 crc kubenswrapper[4841]: I1204 09:20:53.616940 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:53 crc kubenswrapper[4841]: E1204 09:20:53.616936 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:53 crc kubenswrapper[4841]: E1204 09:20:53.617046 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:53 crc kubenswrapper[4841]: E1204 09:20:53.617223 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:53 crc kubenswrapper[4841]: E1204 09:20:53.765615 4841 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 09:20:54 crc kubenswrapper[4841]: I1204 09:20:54.615733 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:54 crc kubenswrapper[4841]: E1204 09:20:54.615917 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:55 crc kubenswrapper[4841]: I1204 09:20:55.616538 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:55 crc kubenswrapper[4841]: I1204 09:20:55.616613 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:55 crc kubenswrapper[4841]: E1204 09:20:55.616733 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:55 crc kubenswrapper[4841]: I1204 09:20:55.616863 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:55 crc kubenswrapper[4841]: E1204 09:20:55.617033 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:55 crc kubenswrapper[4841]: E1204 09:20:55.617235 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:55 crc kubenswrapper[4841]: I1204 09:20:55.618312 4841 scope.go:117] "RemoveContainer" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:20:55 crc kubenswrapper[4841]: I1204 09:20:55.816981 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/3.log" Dec 04 09:20:55 crc kubenswrapper[4841]: I1204 09:20:55.820241 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerStarted","Data":"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed"} Dec 04 09:20:55 crc kubenswrapper[4841]: I1204 09:20:55.820717 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:20:56 crc kubenswrapper[4841]: I1204 09:20:56.530802 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podStartSLOduration=104.530750812 podStartE2EDuration="1m44.530750812s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:20:55.863072691 +0000 UTC m=+122.614862965" watchObservedRunningTime="2025-12-04 09:20:56.530750812 +0000 UTC m=+123.282541036" Dec 04 09:20:56 crc kubenswrapper[4841]: I1204 09:20:56.531305 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7t7hn"] Dec 04 09:20:56 crc kubenswrapper[4841]: I1204 09:20:56.531440 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:56 crc kubenswrapper[4841]: E1204 09:20:56.531547 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:56 crc kubenswrapper[4841]: I1204 09:20:56.616347 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:56 crc kubenswrapper[4841]: E1204 09:20:56.616456 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:57 crc kubenswrapper[4841]: I1204 09:20:57.616253 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:57 crc kubenswrapper[4841]: I1204 09:20:57.616351 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:57 crc kubenswrapper[4841]: E1204 09:20:57.616832 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:57 crc kubenswrapper[4841]: I1204 09:20:57.616467 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:57 crc kubenswrapper[4841]: E1204 09:20:57.617011 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:57 crc kubenswrapper[4841]: E1204 09:20:57.617176 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:20:58 crc kubenswrapper[4841]: I1204 09:20:58.615739 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:20:58 crc kubenswrapper[4841]: E1204 09:20:58.615953 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:20:58 crc kubenswrapper[4841]: E1204 09:20:58.767505 4841 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 09:20:59 crc kubenswrapper[4841]: I1204 09:20:59.616214 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:20:59 crc kubenswrapper[4841]: I1204 09:20:59.616926 4841 scope.go:117] "RemoveContainer" containerID="d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66" Dec 04 09:20:59 crc kubenswrapper[4841]: E1204 09:20:59.616978 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:20:59 crc kubenswrapper[4841]: I1204 09:20:59.617024 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:20:59 crc kubenswrapper[4841]: I1204 09:20:59.617114 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:20:59 crc kubenswrapper[4841]: E1204 09:20:59.617241 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:20:59 crc kubenswrapper[4841]: E1204 09:20:59.617381 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:21:00 crc kubenswrapper[4841]: I1204 09:21:00.615833 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:21:00 crc kubenswrapper[4841]: E1204 09:21:00.616049 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:21:00 crc kubenswrapper[4841]: I1204 09:21:00.840252 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-76xdk_86bfe6c3-d06e-40b1-9801-74abeb07ae15/kube-multus/1.log" Dec 04 09:21:00 crc kubenswrapper[4841]: I1204 09:21:00.840358 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-76xdk" event={"ID":"86bfe6c3-d06e-40b1-9801-74abeb07ae15","Type":"ContainerStarted","Data":"f90f2ef31e9ad848c4856dc45963b90decace9cebc9f0054264d6e30b2c584db"} Dec 04 09:21:01 crc kubenswrapper[4841]: I1204 09:21:01.616951 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:21:01 crc kubenswrapper[4841]: I1204 09:21:01.617106 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:21:01 crc kubenswrapper[4841]: E1204 09:21:01.617280 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:21:01 crc kubenswrapper[4841]: I1204 09:21:01.617459 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:21:01 crc kubenswrapper[4841]: E1204 09:21:01.617626 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:21:01 crc kubenswrapper[4841]: E1204 09:21:01.617994 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:21:02 crc kubenswrapper[4841]: I1204 09:21:02.616053 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:21:02 crc kubenswrapper[4841]: E1204 09:21:02.616450 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 09:21:03 crc kubenswrapper[4841]: I1204 09:21:03.616260 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:21:03 crc kubenswrapper[4841]: I1204 09:21:03.616269 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:21:03 crc kubenswrapper[4841]: E1204 09:21:03.619011 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7t7hn" podUID="e74f87eb-fb70-4679-93f8-ebe5de564484" Dec 04 09:21:03 crc kubenswrapper[4841]: I1204 09:21:03.619044 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:21:03 crc kubenswrapper[4841]: E1204 09:21:03.619190 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 09:21:03 crc kubenswrapper[4841]: E1204 09:21:03.619367 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 09:21:04 crc kubenswrapper[4841]: I1204 09:21:04.615817 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:21:04 crc kubenswrapper[4841]: I1204 09:21:04.619322 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 04 09:21:04 crc kubenswrapper[4841]: I1204 09:21:04.619485 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 04 09:21:05 crc kubenswrapper[4841]: I1204 09:21:05.617276 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:21:05 crc kubenswrapper[4841]: I1204 09:21:05.617439 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:21:05 crc kubenswrapper[4841]: I1204 09:21:05.618203 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:21:05 crc kubenswrapper[4841]: I1204 09:21:05.621543 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 04 09:21:05 crc kubenswrapper[4841]: I1204 09:21:05.621650 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 04 09:21:05 crc kubenswrapper[4841]: I1204 09:21:05.621748 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 04 09:21:05 crc kubenswrapper[4841]: I1204 09:21:05.622005 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.844389 4841 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.905873 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8q2hw"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.906632 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.910003 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.910556 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.911450 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.920743 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.921350 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.921392 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.921432 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.937317 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.946502 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.947156 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.947372 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.947490 4841 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.948268 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.948517 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.937574 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w24pz"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.949108 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.949197 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.949337 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.949586 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-h8dcv"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.949931 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.949966 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.949970 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.950000 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-h8dcv" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.950529 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-42c5d"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.951314 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.952200 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.952615 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.953853 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.954346 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.954659 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qmlf5"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.955161 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.958293 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9nwkf"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.958916 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.959278 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.959630 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.961505 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.961654 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.961779 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.967661 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6s7w"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.968439 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.969036 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.970012 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.970582 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-prd7v"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.971415 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972122 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972269 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972350 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972391 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972456 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972500 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 
09:21:13.972526 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972547 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972594 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972659 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972688 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972722 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972801 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972824 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972875 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972924 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.972939 4841 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973039 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973131 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973203 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973220 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973275 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973288 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973346 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973407 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973447 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973456 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" 
Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973484 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973504 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973525 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973549 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973593 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973649 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973668 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973687 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973743 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973795 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973595 4841 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973823 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973466 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973888 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973936 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.973953 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.974015 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.974067 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.974802 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tq8vh"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.975177 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.975608 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.975658 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.976963 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.977160 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.977313 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.977356 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.977520 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.977627 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.977659 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.980423 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 04 09:21:13 crc 
kubenswrapper[4841]: I1204 09:21:13.980697 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.981244 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.981626 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.982972 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.983063 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.983131 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.983200 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6"] Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.998380 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 04 09:21:13 crc kubenswrapper[4841]: I1204 09:21:13.999538 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.000378 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.000545 4841 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"oauth-serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.000843 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.000963 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.001228 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.002416 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004421 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004535 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c255032-be85-465e-97e8-e0d47337c099-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-24xfh\" (UID: \"1c255032-be85-465e-97e8-e0d47337c099\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004594 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb9caa10-a6d6-4277-b42d-aab13376f4d2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jf82v\" (UID: \"fb9caa10-a6d6-4277-b42d-aab13376f4d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 
09:21:14.004634 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c8845e08-1385-496a-b4e7-90151305bac3-etcd-ca\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004654 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004720 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c8845e08-1385-496a-b4e7-90151305bac3-etcd-service-ca\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004777 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9caa10-a6d6-4277-b42d-aab13376f4d2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jf82v\" (UID: \"fb9caa10-a6d6-4277-b42d-aab13376f4d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004806 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b539de7-f02b-4f85-b40d-673095e3d1f9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-m8m9l\" (UID: \"4b539de7-f02b-4f85-b40d-673095e3d1f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004833 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xnzd\" (UniqueName: \"kubernetes.io/projected/fb9caa10-a6d6-4277-b42d-aab13376f4d2-kube-api-access-6xnzd\") pod \"openshift-apiserver-operator-796bbdcf4f-jf82v\" (UID: \"fb9caa10-a6d6-4277-b42d-aab13376f4d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004859 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd4mm\" (UniqueName: \"kubernetes.io/projected/1c255032-be85-465e-97e8-e0d47337c099-kube-api-access-sd4mm\") pod \"openshift-controller-manager-operator-756b6f6bc6-24xfh\" (UID: \"1c255032-be85-465e-97e8-e0d47337c099\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004922 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh5xl\" (UniqueName: \"kubernetes.io/projected/4b539de7-f02b-4f85-b40d-673095e3d1f9-kube-api-access-lh5xl\") pod \"cluster-samples-operator-665b6dd947-m8m9l\" (UID: \"4b539de7-f02b-4f85-b40d-673095e3d1f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004949 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13aab721-2ebe-42c3-b749-bcba3b51a71b-trusted-ca\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.004986 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/13aab721-2ebe-42c3-b749-bcba3b51a71b-config\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.005031 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8845e08-1385-496a-b4e7-90151305bac3-config\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.005065 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c255032-be85-465e-97e8-e0d47337c099-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-24xfh\" (UID: \"1c255032-be85-465e-97e8-e0d47337c099\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.005107 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c8845e08-1385-496a-b4e7-90151305bac3-etcd-client\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.005137 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8845e08-1385-496a-b4e7-90151305bac3-serving-cert\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 
09:21:14.005210 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtvq\" (UniqueName: \"kubernetes.io/projected/13aab721-2ebe-42c3-b749-bcba3b51a71b-kube-api-access-tmtvq\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.007043 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.007266 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.007567 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.011641 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.012666 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13aab721-2ebe-42c3-b749-bcba3b51a71b-serving-cert\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.017440 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.032030 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 04 09:21:14 crc 
kubenswrapper[4841]: I1204 09:21:14.032165 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.032622 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.035696 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9mhm"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.037956 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.038995 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.039707 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.043589 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.044803 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.045415 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xfscp"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.046310 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.046735 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.047742 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.047989 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.048186 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.048373 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.048509 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.048632 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.048657 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.048732 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.048930 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.049793 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.050087 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.050201 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.050398 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.050561 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.050713 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.050844 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.051406 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.051723 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rpzh"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.052068 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.013633 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrjx\" (UniqueName: \"kubernetes.io/projected/4be459f8-8d18-441f-ae50-b56bdcf9367a-kube-api-access-jxrjx\") pod \"downloads-7954f5f757-h8dcv\" (UID: \"4be459f8-8d18-441f-ae50-b56bdcf9367a\") " pod="openshift-console/downloads-7954f5f757-h8dcv" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.056507 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w99m5\" (UniqueName: \"kubernetes.io/projected/c8845e08-1385-496a-b4e7-90151305bac3-kube-api-access-w99m5\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.062067 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.062365 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.062865 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.063651 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.068204 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.070977 4841 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.073432 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.073750 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.082984 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.083271 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.086161 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.086562 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.086830 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.086917 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ql5vn"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.087248 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.087523 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xjqtj"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.087900 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-drgr2"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.088212 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.088249 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.088228 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hf9vt"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.088437 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.088562 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.089232 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.089477 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.090016 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.090043 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.090774 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8q2hw"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.091944 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.093095 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w24pz"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.094207 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.095282 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tq8vh"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.096337 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.097319 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.098362 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ngpxk"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.099152 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ngpxk" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.099566 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qmlf5"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.100487 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.102041 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.103871 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.103906 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-prd7v"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.106449 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xfscp"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.107862 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9mhm"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.110519 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.111221 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.111550 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6"] 
Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.112521 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.113503 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.117610 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.117640 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6s7w"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.119156 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-h8dcv"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.121407 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9nwkf"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.122701 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.124243 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jv68h"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.125148 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jv68h" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.126577 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.126684 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8v24d"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.127153 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8v24d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.128129 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hf9vt"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.128804 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.129172 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.130551 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.131740 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.132730 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-42c5d"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.133722 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft"] Dec 
04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.139176 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.139244 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.141153 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-drgr2"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.142104 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rpzh"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.145018 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jv68h"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.146474 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.147818 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ql5vn"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.149405 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8v24d"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.150810 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fhg4j"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.151859 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.152246 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fhg4j"] Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.155948 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.156922 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9caa10-a6d6-4277-b42d-aab13376f4d2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jf82v\" (UID: \"fb9caa10-a6d6-4277-b42d-aab13376f4d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.156949 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87a01003-7343-4fba-ada1-2be090ebc0dd-console-serving-cert\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.156970 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b539de7-f02b-4f85-b40d-673095e3d1f9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-m8m9l\" (UID: \"4b539de7-f02b-4f85-b40d-673095e3d1f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.156987 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45203b92-de55-44d6-a0f8-c2559b74d65d-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-7pssc\" (UID: \"45203b92-de55-44d6-a0f8-c2559b74d65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157004 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ff4059e-7093-443d-9cc1-c24a2f7e912d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157023 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-audit-policies\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157039 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157058 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc 
kubenswrapper[4841]: I1204 09:21:14.157123 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xnzd\" (UniqueName: \"kubernetes.io/projected/fb9caa10-a6d6-4277-b42d-aab13376f4d2-kube-api-access-6xnzd\") pod \"openshift-apiserver-operator-796bbdcf4f-jf82v\" (UID: \"fb9caa10-a6d6-4277-b42d-aab13376f4d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157441 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd4mm\" (UniqueName: \"kubernetes.io/projected/1c255032-be85-465e-97e8-e0d47337c099-kube-api-access-sd4mm\") pod \"openshift-controller-manager-operator-756b6f6bc6-24xfh\" (UID: \"1c255032-be85-465e-97e8-e0d47337c099\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157537 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-oauth-serving-cert\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157579 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sb5w\" (UniqueName: \"kubernetes.io/projected/87a01003-7343-4fba-ada1-2be090ebc0dd-kube-api-access-8sb5w\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157597 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9caa10-a6d6-4277-b42d-aab13376f4d2-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-jf82v\" (UID: \"fb9caa10-a6d6-4277-b42d-aab13376f4d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157643 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-audit-dir\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157718 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45203b92-de55-44d6-a0f8-c2559b74d65d-config\") pod \"kube-apiserver-operator-766d6c64bb-7pssc\" (UID: \"45203b92-de55-44d6-a0f8-c2559b74d65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157746 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh5xl\" (UniqueName: \"kubernetes.io/projected/4b539de7-f02b-4f85-b40d-673095e3d1f9-kube-api-access-lh5xl\") pod \"cluster-samples-operator-665b6dd947-m8m9l\" (UID: \"4b539de7-f02b-4f85-b40d-673095e3d1f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157803 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-service-ca\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157867 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28ftv\" (UniqueName: \"kubernetes.io/projected/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-kube-api-access-28ftv\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157898 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-config\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157920 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.157960 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-trusted-ca-bundle\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158004 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ff4059e-7093-443d-9cc1-c24a2f7e912d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: 
\"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158020 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-dir\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158039 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45203b92-de55-44d6-a0f8-c2559b74d65d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7pssc\" (UID: \"45203b92-de55-44d6-a0f8-c2559b74d65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158063 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-console-config\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158086 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13aab721-2ebe-42c3-b749-bcba3b51a71b-trusted-ca\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158103 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/13aab721-2ebe-42c3-b749-bcba3b51a71b-config\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158121 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4aee6f-c06c-452c-ad42-b6079e00f178-config\") pod \"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158139 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-etcd-client\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158157 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158174 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158216 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8845e08-1385-496a-b4e7-90151305bac3-config\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158239 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c255032-be85-465e-97e8-e0d47337c099-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-24xfh\" (UID: \"1c255032-be85-465e-97e8-e0d47337c099\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158258 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt5b7\" (UniqueName: \"kubernetes.io/projected/3d4aee6f-c06c-452c-ad42-b6079e00f178-kube-api-access-rt5b7\") pod \"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158275 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6p6f\" (UniqueName: \"kubernetes.io/projected/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-kube-api-access-m6p6f\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158296 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lr9d2\" (UniqueName: \"kubernetes.io/projected/fa89e0be-c464-47b6-b682-eea7a3d34bef-kube-api-access-lr9d2\") pod \"kube-storage-version-migrator-operator-b67b599dd-vv79b\" (UID: \"fa89e0be-c464-47b6-b682-eea7a3d34bef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158317 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c8845e08-1385-496a-b4e7-90151305bac3-etcd-client\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158335 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg2kg\" (UniqueName: \"kubernetes.io/projected/2ff4059e-7093-443d-9cc1-c24a2f7e912d-kube-api-access-hg2kg\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158356 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8845e08-1385-496a-b4e7-90151305bac3-serving-cert\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158371 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: 
\"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158402 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87a01003-7343-4fba-ada1-2be090ebc0dd-console-oauth-config\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158422 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3d4aee6f-c06c-452c-ad42-b6079e00f178-machine-approver-tls\") pod \"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158438 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-images\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158464 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmtvq\" (UniqueName: \"kubernetes.io/projected/13aab721-2ebe-42c3-b749-bcba3b51a71b-kube-api-access-tmtvq\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158481 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xvmm8\" (UniqueName: \"kubernetes.io/projected/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-kube-api-access-xvmm8\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158498 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158513 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h4w7\" (UniqueName: \"kubernetes.io/projected/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-kube-api-access-8h4w7\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158534 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-config\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158551 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-policies\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158575 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158603 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13aab721-2ebe-42c3-b749-bcba3b51a71b-serving-cert\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158620 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-client-ca\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158639 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158657 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/88a93a4e-e527-443f-a5d9-fdd5dca46c1b-serving-cert\") pod \"openshift-config-operator-7777fb866f-tb8s6\" (UID: \"88a93a4e-e527-443f-a5d9-fdd5dca46c1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158676 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjht\" (UniqueName: \"kubernetes.io/projected/88a93a4e-e527-443f-a5d9-fdd5dca46c1b-kube-api-access-xqjht\") pod \"openshift-config-operator-7777fb866f-tb8s6\" (UID: \"88a93a4e-e527-443f-a5d9-fdd5dca46c1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158696 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxrjx\" (UniqueName: \"kubernetes.io/projected/4be459f8-8d18-441f-ae50-b56bdcf9367a-kube-api-access-jxrjx\") pod \"downloads-7954f5f757-h8dcv\" (UID: \"4be459f8-8d18-441f-ae50-b56bdcf9367a\") " pod="openshift-console/downloads-7954f5f757-h8dcv" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158711 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158728 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/88a93a4e-e527-443f-a5d9-fdd5dca46c1b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-tb8s6\" (UID: \"88a93a4e-e527-443f-a5d9-fdd5dca46c1b\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158745 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-encryption-config\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158782 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158804 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158825 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w99m5\" (UniqueName: \"kubernetes.io/projected/c8845e08-1385-496a-b4e7-90151305bac3-kube-api-access-w99m5\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158841 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d4aee6f-c06c-452c-ad42-b6079e00f178-auth-proxy-config\") pod \"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158857 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c255032-be85-465e-97e8-e0d47337c099-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-24xfh\" (UID: \"1c255032-be85-465e-97e8-e0d47337c099\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158876 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ff4059e-7093-443d-9cc1-c24a2f7e912d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158894 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa89e0be-c464-47b6-b682-eea7a3d34bef-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vv79b\" (UID: \"fa89e0be-c464-47b6-b682-eea7a3d34bef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158911 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158932 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb9caa10-a6d6-4277-b42d-aab13376f4d2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jf82v\" (UID: \"fb9caa10-a6d6-4277-b42d-aab13376f4d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158952 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-serving-cert\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158969 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158986 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa89e0be-c464-47b6-b682-eea7a3d34bef-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vv79b\" (UID: \"fa89e0be-c464-47b6-b682-eea7a3d34bef\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.158990 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13aab721-2ebe-42c3-b749-bcba3b51a71b-trusted-ca\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.159001 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.159002 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8845e08-1385-496a-b4e7-90151305bac3-config\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.159038 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c8845e08-1385-496a-b4e7-90151305bac3-etcd-ca\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.159159 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c8845e08-1385-496a-b4e7-90151305bac3-etcd-service-ca\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.159180 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-serving-cert\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.159736 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c8845e08-1385-496a-b4e7-90151305bac3-etcd-ca\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.159951 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13aab721-2ebe-42c3-b749-bcba3b51a71b-config\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.160288 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c8845e08-1385-496a-b4e7-90151305bac3-etcd-service-ca\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.162478 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c8845e08-1385-496a-b4e7-90151305bac3-etcd-client\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.162561 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b539de7-f02b-4f85-b40d-673095e3d1f9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-m8m9l\" (UID: \"4b539de7-f02b-4f85-b40d-673095e3d1f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.163071 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13aab721-2ebe-42c3-b749-bcba3b51a71b-serving-cert\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.163363 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c255032-be85-465e-97e8-e0d47337c099-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-24xfh\" (UID: \"1c255032-be85-465e-97e8-e0d47337c099\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.163751 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8845e08-1385-496a-b4e7-90151305bac3-serving-cert\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.169259 4841 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.175872 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb9caa10-a6d6-4277-b42d-aab13376f4d2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jf82v\" (UID: \"fb9caa10-a6d6-4277-b42d-aab13376f4d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.188918 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.208607 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.228801 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.248525 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261619 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4aee6f-c06c-452c-ad42-b6079e00f178-config\") pod \"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261652 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-etcd-client\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: 
\"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261671 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261688 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261713 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt5b7\" (UniqueName: \"kubernetes.io/projected/3d4aee6f-c06c-452c-ad42-b6079e00f178-kube-api-access-rt5b7\") pod \"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261732 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6p6f\" (UniqueName: \"kubernetes.io/projected/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-kube-api-access-m6p6f\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261747 4841 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lr9d2\" (UniqueName: \"kubernetes.io/projected/fa89e0be-c464-47b6-b682-eea7a3d34bef-kube-api-access-lr9d2\") pod \"kube-storage-version-migrator-operator-b67b599dd-vv79b\" (UID: \"fa89e0be-c464-47b6-b682-eea7a3d34bef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261781 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg2kg\" (UniqueName: \"kubernetes.io/projected/2ff4059e-7093-443d-9cc1-c24a2f7e912d-kube-api-access-hg2kg\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261797 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261820 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87a01003-7343-4fba-ada1-2be090ebc0dd-console-oauth-config\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261837 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3d4aee6f-c06c-452c-ad42-b6079e00f178-machine-approver-tls\") pod \"machine-approver-56656f9798-ppqhq\" (UID: 
\"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261854 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-images\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261887 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvmm8\" (UniqueName: \"kubernetes.io/projected/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-kube-api-access-xvmm8\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261904 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261918 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h4w7\" (UniqueName: \"kubernetes.io/projected/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-kube-api-access-8h4w7\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261933 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-config\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261946 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-policies\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261961 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261976 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-client-ca\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.261994 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262008 4841 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88a93a4e-e527-443f-a5d9-fdd5dca46c1b-serving-cert\") pod \"openshift-config-operator-7777fb866f-tb8s6\" (UID: \"88a93a4e-e527-443f-a5d9-fdd5dca46c1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262026 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjht\" (UniqueName: \"kubernetes.io/projected/88a93a4e-e527-443f-a5d9-fdd5dca46c1b-kube-api-access-xqjht\") pod \"openshift-config-operator-7777fb866f-tb8s6\" (UID: \"88a93a4e-e527-443f-a5d9-fdd5dca46c1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262049 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262064 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/88a93a4e-e527-443f-a5d9-fdd5dca46c1b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-tb8s6\" (UID: \"88a93a4e-e527-443f-a5d9-fdd5dca46c1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262080 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-encryption-config\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: 
\"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262095 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262112 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262132 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d4aee6f-c06c-452c-ad42-b6079e00f178-auth-proxy-config\") pod \"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262155 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ff4059e-7093-443d-9cc1-c24a2f7e912d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262170 4841 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa89e0be-c464-47b6-b682-eea7a3d34bef-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vv79b\" (UID: \"fa89e0be-c464-47b6-b682-eea7a3d34bef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262185 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262202 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-serving-cert\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262219 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262235 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa89e0be-c464-47b6-b682-eea7a3d34bef-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vv79b\" (UID: \"fa89e0be-c464-47b6-b682-eea7a3d34bef\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262251 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262270 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-serving-cert\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262290 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87a01003-7343-4fba-ada1-2be090ebc0dd-console-serving-cert\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262319 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45203b92-de55-44d6-a0f8-c2559b74d65d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7pssc\" (UID: \"45203b92-de55-44d6-a0f8-c2559b74d65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262335 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2ff4059e-7093-443d-9cc1-c24a2f7e912d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262354 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-audit-policies\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262372 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262390 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262416 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-oauth-serving-cert\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262431 
4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sb5w\" (UniqueName: \"kubernetes.io/projected/87a01003-7343-4fba-ada1-2be090ebc0dd-kube-api-access-8sb5w\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262456 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-audit-dir\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262471 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45203b92-de55-44d6-a0f8-c2559b74d65d-config\") pod \"kube-apiserver-operator-766d6c64bb-7pssc\" (UID: \"45203b92-de55-44d6-a0f8-c2559b74d65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262492 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-service-ca\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262508 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28ftv\" (UniqueName: \"kubernetes.io/projected/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-kube-api-access-28ftv\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc 
kubenswrapper[4841]: I1204 09:21:14.262523 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-config\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262540 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262562 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-trusted-ca-bundle\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262578 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ff4059e-7093-443d-9cc1-c24a2f7e912d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262592 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-dir\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262607 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45203b92-de55-44d6-a0f8-c2559b74d65d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7pssc\" (UID: \"45203b92-de55-44d6-a0f8-c2559b74d65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.262621 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-console-config\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.263272 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-console-config\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.263620 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4aee6f-c06c-452c-ad42-b6079e00f178-config\") pod \"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.264736 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: 
\"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.264892 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d4aee6f-c06c-452c-ad42-b6079e00f178-auth-proxy-config\") pod \"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.265698 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.266073 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.266386 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-dir\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.266467 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-audit-dir\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.267158 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-images\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.267548 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.267784 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.268003 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-config\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.268297 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.268384 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-oauth-serving-cert\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.268658 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/88a93a4e-e527-443f-a5d9-fdd5dca46c1b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-tb8s6\" (UID: \"88a93a4e-e527-443f-a5d9-fdd5dca46c1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.268976 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ff4059e-7093-443d-9cc1-c24a2f7e912d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.269815 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-policies\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: 
I1204 09:21:14.269841 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-client-ca\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.271461 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-service-ca\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.271668 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.271683 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.272219 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.272299 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-etcd-client\") 
pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.272633 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-audit-policies\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.272847 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.273010 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a01003-7343-4fba-ada1-2be090ebc0dd-trusted-ca-bundle\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.273625 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-config\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.273974 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3d4aee6f-c06c-452c-ad42-b6079e00f178-machine-approver-tls\") pod 
\"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.274171 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.274616 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ff4059e-7093-443d-9cc1-c24a2f7e912d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.275302 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.275945 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-serving-cert\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.276416 4841 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.276453 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-encryption-config\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.276844 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88a93a4e-e527-443f-a5d9-fdd5dca46c1b-serving-cert\") pod \"openshift-config-operator-7777fb866f-tb8s6\" (UID: \"88a93a4e-e527-443f-a5d9-fdd5dca46c1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.276991 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.277718 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.277825 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.278996 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-serving-cert\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.292758 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.309061 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.330136 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.349081 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.368751 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.388062 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.409979 4841 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.449621 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.452231 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.469302 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.489570 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.509307 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.528096 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.548705 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.569350 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.573962 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1c255032-be85-465e-97e8-e0d47337c099-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-24xfh\" (UID: \"1c255032-be85-465e-97e8-e0d47337c099\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.577935 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87a01003-7343-4fba-ada1-2be090ebc0dd-console-serving-cert\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.578929 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87a01003-7343-4fba-ada1-2be090ebc0dd-console-oauth-config\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.589519 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.600371 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa89e0be-c464-47b6-b682-eea7a3d34bef-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vv79b\" (UID: \"fa89e0be-c464-47b6-b682-eea7a3d34bef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.610173 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.617703 4841 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa89e0be-c464-47b6-b682-eea7a3d34bef-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vv79b\" (UID: \"fa89e0be-c464-47b6-b682-eea7a3d34bef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.629684 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.648719 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.669027 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.688900 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.703488 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45203b92-de55-44d6-a0f8-c2559b74d65d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7pssc\" (UID: \"45203b92-de55-44d6-a0f8-c2559b74d65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.708967 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.730285 4841 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.749690 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.757923 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45203b92-de55-44d6-a0f8-c2559b74d65d-config\") pod \"kube-apiserver-operator-766d6c64bb-7pssc\" (UID: \"45203b92-de55-44d6-a0f8-c2559b74d65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.768728 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.789033 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.809757 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.829836 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.849558 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.869694 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.888683 4841 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.909603 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.929546 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.950041 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 04 09:21:14 crc kubenswrapper[4841]: I1204 09:21:14.989312 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.011188 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.040283 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.050379 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.067816 4841 request.go:700] Waited for 1.015482388s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.070855 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 
09:21:15.109950 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.129408 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.150207 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.180517 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.190530 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.210283 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.229611 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.248618 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.269549 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.290158 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.308965 4841 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.329207 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.349330 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.369636 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.389532 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.409811 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.430083 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.449445 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.470471 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.489600 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.508672 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.529404 4841 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.549988 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.570284 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.597572 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.609912 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.629619 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.649124 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.669346 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.689552 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.709474 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.728847 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.749938 4841 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.769938 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.789569 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.809401 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.830052 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.849386 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.869649 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.888537 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.908835 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.929362 4841 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 09:21:15.950357 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 04 09:21:15 crc kubenswrapper[4841]: I1204 
09:21:15.969508 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.020197 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xnzd\" (UniqueName: \"kubernetes.io/projected/fb9caa10-a6d6-4277-b42d-aab13376f4d2-kube-api-access-6xnzd\") pod \"openshift-apiserver-operator-796bbdcf4f-jf82v\" (UID: \"fb9caa10-a6d6-4277-b42d-aab13376f4d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.040895 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd4mm\" (UniqueName: \"kubernetes.io/projected/1c255032-be85-465e-97e8-e0d47337c099-kube-api-access-sd4mm\") pod \"openshift-controller-manager-operator-756b6f6bc6-24xfh\" (UID: \"1c255032-be85-465e-97e8-e0d47337c099\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.059586 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh5xl\" (UniqueName: \"kubernetes.io/projected/4b539de7-f02b-4f85-b40d-673095e3d1f9-kube-api-access-lh5xl\") pod \"cluster-samples-operator-665b6dd947-m8m9l\" (UID: \"4b539de7-f02b-4f85-b40d-673095e3d1f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.069275 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.079856 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.084929 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmtvq\" (UniqueName: \"kubernetes.io/projected/13aab721-2ebe-42c3-b749-bcba3b51a71b-kube-api-access-tmtvq\") pod \"console-operator-58897d9998-w24pz\" (UID: \"13aab721-2ebe-42c3-b749-bcba3b51a71b\") " pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.087426 4841 request.go:700] Waited for 1.928155s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.100991 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxrjx\" (UniqueName: \"kubernetes.io/projected/4be459f8-8d18-441f-ae50-b56bdcf9367a-kube-api-access-jxrjx\") pod \"downloads-7954f5f757-h8dcv\" (UID: \"4be459f8-8d18-441f-ae50-b56bdcf9367a\") " pod="openshift-console/downloads-7954f5f757-h8dcv" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.112006 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.116330 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w99m5\" (UniqueName: \"kubernetes.io/projected/c8845e08-1385-496a-b4e7-90151305bac3-kube-api-access-w99m5\") pod \"etcd-operator-b45778765-8q2hw\" (UID: \"c8845e08-1385-496a-b4e7-90151305bac3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.125635 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h4w7\" (UniqueName: \"kubernetes.io/projected/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-kube-api-access-8h4w7\") pod \"oauth-openshift-558db77b4-qmlf5\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.142555 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28ftv\" (UniqueName: \"kubernetes.io/projected/6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b-kube-api-access-28ftv\") pod \"machine-api-operator-5694c8668f-42c5d\" (UID: \"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.144315 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.156201 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.176257 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-h8dcv" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.176886 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvmm8\" (UniqueName: \"kubernetes.io/projected/bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d-kube-api-access-xvmm8\") pod \"apiserver-7bbb656c7d-h7sxn\" (UID: \"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.191274 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ff4059e-7093-443d-9cc1-c24a2f7e912d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.191956 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.202191 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.219352 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6p6f\" (UniqueName: \"kubernetes.io/projected/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-kube-api-access-m6p6f\") pod \"controller-manager-879f6c89f-m9mhm\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.228100 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sb5w\" (UniqueName: \"kubernetes.io/projected/87a01003-7343-4fba-ada1-2be090ebc0dd-kube-api-access-8sb5w\") pod \"console-f9d7485db-prd7v\" (UID: \"87a01003-7343-4fba-ada1-2be090ebc0dd\") " pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.247084 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.255101 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt5b7\" (UniqueName: \"kubernetes.io/projected/3d4aee6f-c06c-452c-ad42-b6079e00f178-kube-api-access-rt5b7\") pod \"machine-approver-56656f9798-ppqhq\" (UID: \"3d4aee6f-c06c-452c-ad42-b6079e00f178\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.267344 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr9d2\" (UniqueName: \"kubernetes.io/projected/fa89e0be-c464-47b6-b682-eea7a3d34bef-kube-api-access-lr9d2\") pod \"kube-storage-version-migrator-operator-b67b599dd-vv79b\" (UID: \"fa89e0be-c464-47b6-b682-eea7a3d34bef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.298580 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjht\" (UniqueName: \"kubernetes.io/projected/88a93a4e-e527-443f-a5d9-fdd5dca46c1b-kube-api-access-xqjht\") pod \"openshift-config-operator-7777fb866f-tb8s6\" (UID: \"88a93a4e-e527-443f-a5d9-fdd5dca46c1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.298824 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.313536 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45203b92-de55-44d6-a0f8-c2559b74d65d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7pssc\" (UID: \"45203b92-de55-44d6-a0f8-c2559b74d65d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.325911 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg2kg\" (UniqueName: \"kubernetes.io/projected/2ff4059e-7093-443d-9cc1-c24a2f7e912d-kube-api-access-hg2kg\") pod \"cluster-image-registry-operator-dc59b4c8b-dt9xr\" (UID: \"2ff4059e-7093-443d-9cc1-c24a2f7e912d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.326160 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.339302 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.362885 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.365810 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407234 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-bound-sa-token\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407281 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-config\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407304 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b69851e-84ff-4014-8abe-5d28b0180416-serving-cert\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407326 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c201ee2d-0b9b-4737-b0d9-091ccd258e1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6zltn\" (UID: \"c201ee2d-0b9b-4737-b0d9-091ccd258e1e\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407351 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwzd\" (UniqueName: \"kubernetes.io/projected/abeb1c4f-7981-43be-b8f8-4df803599a4d-kube-api-access-wfwzd\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407380 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b69851e-84ff-4014-8abe-5d28b0180416-config\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407401 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24c7a712-e7b0-49ef-8e4c-80fbec70c8f4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-47x5z\" (UID: \"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407421 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97424a80-37c4-4737-acac-de2182271f8d-proxy-tls\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407445 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97424a80-37c4-4737-acac-de2182271f8d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407469 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bef2657-6cef-438b-965b-ad22a37457d0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j45m5\" (UID: \"4bef2657-6cef-438b-965b-ad22a37457d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407497 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407525 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgxzj\" (UniqueName: \"kubernetes.io/projected/fb45e445-79c2-407e-bc3d-630465ec46ae-kube-api-access-cgxzj\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407547 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/abeb1c4f-7981-43be-b8f8-4df803599a4d-encryption-config\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407562 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb9cc\" (UniqueName: \"kubernetes.io/projected/0b69851e-84ff-4014-8abe-5d28b0180416-kube-api-access-tb9cc\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407580 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abeb1c4f-7981-43be-b8f8-4df803599a4d-audit-dir\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407594 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-client-ca\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407614 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c9eeef2c-8b36-4fea-86d7-5732fad3d501-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 
09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407636 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-image-import-ca\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407714 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/abeb1c4f-7981-43be-b8f8-4df803599a4d-etcd-client\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407746 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/abeb1c4f-7981-43be-b8f8-4df803599a4d-node-pullsecrets\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407819 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk49c\" (UniqueName: \"kubernetes.io/projected/2c594333-c21f-49ee-9519-f86dc5bca7f1-kube-api-access-vk49c\") pod \"dns-operator-744455d44c-9nwkf\" (UID: \"2c594333-c21f-49ee-9519-f86dc5bca7f1\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.407904 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-audit\") pod \"apiserver-76f77b778f-xfscp\" (UID: 
\"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.408153 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqw69\" (UniqueName: \"kubernetes.io/projected/c201ee2d-0b9b-4737-b0d9-091ccd258e1e-kube-api-access-hqw69\") pod \"control-plane-machine-set-operator-78cbb6b69f-6zltn\" (UID: \"c201ee2d-0b9b-4737-b0d9-091ccd258e1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn" Dec 04 09:21:16 crc kubenswrapper[4841]: E1204 09:21:16.408258 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:16.908222205 +0000 UTC m=+143.660012409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.408284 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb45e445-79c2-407e-bc3d-630465ec46ae-serving-cert\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.408324 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b69851e-84ff-4014-8abe-5d28b0180416-service-ca-bundle\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.411653 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24c7a712-e7b0-49ef-8e4c-80fbec70c8f4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-47x5z\" (UID: \"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.411693 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c594333-c21f-49ee-9519-f86dc5bca7f1-metrics-tls\") pod \"dns-operator-744455d44c-9nwkf\" (UID: \"2c594333-c21f-49ee-9519-f86dc5bca7f1\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412104 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lck4j\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-kube-api-access-lck4j\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412165 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24c7a712-e7b0-49ef-8e4c-80fbec70c8f4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-47x5z\" (UID: 
\"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412218 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9fjt\" (UniqueName: \"kubernetes.io/projected/97424a80-37c4-4737-acac-de2182271f8d-kube-api-access-d9fjt\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412247 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-certificates\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412305 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-tls\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412378 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-trusted-ca\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412429 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2scj2\" (UniqueName: \"kubernetes.io/projected/115e6f25-1465-48f0-bfde-c6d51d98b2c5-kube-api-access-2scj2\") pod \"migrator-59844c95c7-7vbcr\" (UID: \"115e6f25-1465-48f0-bfde-c6d51d98b2c5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412466 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/205da08e-f36b-4980-b220-8d7f456d1863-profile-collector-cert\") pod \"catalog-operator-68c6474976-hgj8p\" (UID: \"205da08e-f36b-4980-b220-8d7f456d1863\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412517 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4ttv\" (UniqueName: \"kubernetes.io/projected/205da08e-f36b-4980-b220-8d7f456d1863-kube-api-access-j4ttv\") pod \"catalog-operator-68c6474976-hgj8p\" (UID: \"205da08e-f36b-4980-b220-8d7f456d1863\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412599 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-config\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412625 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bef2657-6cef-438b-965b-ad22a37457d0-config\") pod 
\"kube-controller-manager-operator-78b949d7b-j45m5\" (UID: \"4bef2657-6cef-438b-965b-ad22a37457d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412695 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97424a80-37c4-4737-acac-de2182271f8d-images\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412751 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/205da08e-f36b-4980-b220-8d7f456d1863-srv-cert\") pod \"catalog-operator-68c6474976-hgj8p\" (UID: \"205da08e-f36b-4980-b220-8d7f456d1863\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.412823 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bef2657-6cef-438b-965b-ad22a37457d0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j45m5\" (UID: \"4bef2657-6cef-438b-965b-ad22a37457d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.415064 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-etcd-serving-ca\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc 
kubenswrapper[4841]: I1204 09:21:16.415118 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c9eeef2c-8b36-4fea-86d7-5732fad3d501-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.415140 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.415177 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b69851e-84ff-4014-8abe-5d28b0180416-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.415198 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abeb1c4f-7981-43be-b8f8-4df803599a4d-serving-cert\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.486825 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" Dec 04 09:21:16 crc kubenswrapper[4841]: W1204 09:21:16.505952 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d4aee6f_c06c_452c_ad42_b6079e00f178.slice/crio-764f3d6304750738761133352343cab070717ce9c6c28d67159d6c7e371a390f WatchSource:0}: Error finding container 764f3d6304750738761133352343cab070717ce9c6c28d67159d6c7e371a390f: Status 404 returned error can't find the container with id 764f3d6304750738761133352343cab070717ce9c6c28d67159d6c7e371a390f Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.515956 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516212 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305da833-682f-4eac-a135-e06598b0a179-config-volume\") pod \"dns-default-jv68h\" (UID: \"305da833-682f-4eac-a135-e06598b0a179\") " pod="openshift-dns/dns-default-jv68h" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516257 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-certificates\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516287 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-tls\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516303 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-trusted-ca\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516360 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2scj2\" (UniqueName: \"kubernetes.io/projected/115e6f25-1465-48f0-bfde-c6d51d98b2c5-kube-api-access-2scj2\") pod \"migrator-59844c95c7-7vbcr\" (UID: \"115e6f25-1465-48f0-bfde-c6d51d98b2c5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516386 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/205da08e-f36b-4980-b220-8d7f456d1863-profile-collector-cert\") pod \"catalog-operator-68c6474976-hgj8p\" (UID: \"205da08e-f36b-4980-b220-8d7f456d1863\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516422 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4ttv\" (UniqueName: \"kubernetes.io/projected/205da08e-f36b-4980-b220-8d7f456d1863-kube-api-access-j4ttv\") pod \"catalog-operator-68c6474976-hgj8p\" (UID: \"205da08e-f36b-4980-b220-8d7f456d1863\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:16 crc 
kubenswrapper[4841]: I1204 09:21:16.516439 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz4bv\" (UniqueName: \"kubernetes.io/projected/c9962d63-494b-466a-a0f4-feeaf089b3cc-kube-api-access-dz4bv\") pod \"machine-config-controller-84d6567774-22k9m\" (UID: \"c9962d63-494b-466a-a0f4-feeaf089b3cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516464 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-config\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516501 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bef2657-6cef-438b-965b-ad22a37457d0-config\") pod \"kube-controller-manager-operator-78b949d7b-j45m5\" (UID: \"4bef2657-6cef-438b-965b-ad22a37457d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516531 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ac74ed8-7277-42a5-b155-96f047e1b662-metrics-certs\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516545 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/305da833-682f-4eac-a135-e06598b0a179-metrics-tls\") pod \"dns-default-jv68h\" (UID: \"305da833-682f-4eac-a135-e06598b0a179\") " pod="openshift-dns/dns-default-jv68h" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516585 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzq4\" (UniqueName: \"kubernetes.io/projected/9aacd7f5-b079-49b8-8792-9a27f38b29f9-kube-api-access-lvzq4\") pod \"service-ca-operator-777779d784-drgr2\" (UID: \"9aacd7f5-b079-49b8-8792-9a27f38b29f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516613 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-config-volume\") pod \"collect-profiles-29413995-4fqg9\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516628 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5504f1d6-044f-4e0c-881d-196123a56998-node-bootstrap-token\") pod \"machine-config-server-ngpxk\" (UID: \"5504f1d6-044f-4e0c-881d-196123a56998\") " pod="openshift-machine-config-operator/machine-config-server-ngpxk" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516682 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97424a80-37c4-4737-acac-de2182271f8d-images\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 
09:21:16.516699 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-csi-data-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516738 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/205da08e-f36b-4980-b220-8d7f456d1863-srv-cert\") pod \"catalog-operator-68c6474976-hgj8p\" (UID: \"205da08e-f36b-4980-b220-8d7f456d1863\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516794 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bef2657-6cef-438b-965b-ad22a37457d0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j45m5\" (UID: \"4bef2657-6cef-438b-965b-ad22a37457d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516813 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgjln\" (UniqueName: \"kubernetes.io/projected/ca535148-cfdc-49e0-956b-c848b27b1a1a-kube-api-access-pgjln\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516828 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5504f1d6-044f-4e0c-881d-196123a56998-certs\") pod \"machine-config-server-ngpxk\" (UID: \"5504f1d6-044f-4e0c-881d-196123a56998\") 
" pod="openshift-machine-config-operator/machine-config-server-ngpxk" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516863 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-997l2\" (UniqueName: \"kubernetes.io/projected/34aba487-9181-4d88-9d12-ee2ff45ef349-kube-api-access-997l2\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516888 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rc5\" (UniqueName: \"kubernetes.io/projected/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-kube-api-access-v8rc5\") pod \"collect-profiles-29413995-4fqg9\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516958 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-etcd-serving-ca\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516977 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c07648d4-9c54-43ab-8446-762760141c94-trusted-ca\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.516992 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/c07648d4-9c54-43ab-8446-762760141c94-metrics-tls\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517065 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c9eeef2c-8b36-4fea-86d7-5732fad3d501-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517083 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bf485bd8-a407-4ead-ba46-ef55f540c63e-srv-cert\") pod \"olm-operator-6b444d44fb-8x6h4\" (UID: \"bf485bd8-a407-4ead-ba46-ef55f540c63e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517122 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8ac74ed8-7277-42a5-b155-96f047e1b662-default-certificate\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517158 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c45e4549-b737-472c-a86b-0b3ea110d7f1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-v7mdz\" (UID: \"c45e4549-b737-472c-a86b-0b3ea110d7f1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" 
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517196 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517213 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxbmf\" (UniqueName: \"kubernetes.io/projected/c45e4549-b737-472c-a86b-0b3ea110d7f1-kube-api-access-cxbmf\") pod \"package-server-manager-789f6589d5-v7mdz\" (UID: \"c45e4549-b737-472c-a86b-0b3ea110d7f1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517277 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn4m8\" (UniqueName: \"kubernetes.io/projected/305da833-682f-4eac-a135-e06598b0a179-kube-api-access-cn4m8\") pod \"dns-default-jv68h\" (UID: \"305da833-682f-4eac-a135-e06598b0a179\") " pod="openshift-dns/dns-default-jv68h" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517293 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-secret-volume\") pod \"collect-profiles-29413995-4fqg9\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517308 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34aba487-9181-4d88-9d12-ee2ff45ef349-tmpfs\") pod 
\"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517324 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bf485bd8-a407-4ead-ba46-ef55f540c63e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8x6h4\" (UID: \"bf485bd8-a407-4ead-ba46-ef55f540c63e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517362 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b69851e-84ff-4014-8abe-5d28b0180416-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517388 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abeb1c4f-7981-43be-b8f8-4df803599a4d-serving-cert\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517433 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-bound-sa-token\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517452 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c201ee2d-0b9b-4737-b0d9-091ccd258e1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6zltn\" (UID: \"c201ee2d-0b9b-4737-b0d9-091ccd258e1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517478 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-config\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517515 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b69851e-84ff-4014-8abe-5d28b0180416-serving-cert\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517532 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9d3eee6b-070a-4ef5-b908-50fcc90e9ad6-signing-cabundle\") pod \"service-ca-9c57cc56f-hf9vt\" (UID: \"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6\") " pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517582 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfwzd\" (UniqueName: \"kubernetes.io/projected/abeb1c4f-7981-43be-b8f8-4df803599a4d-kube-api-access-wfwzd\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 
09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517604 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b69851e-84ff-4014-8abe-5d28b0180416-config\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517620 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24c7a712-e7b0-49ef-8e4c-80fbec70c8f4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-47x5z\" (UID: \"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517636 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97424a80-37c4-4737-acac-de2182271f8d-proxy-tls\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517703 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97424a80-37c4-4737-acac-de2182271f8d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517777 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bef2657-6cef-438b-965b-ad22a37457d0-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-j45m5\" (UID: \"4bef2657-6cef-438b-965b-ad22a37457d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517880 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34aba487-9181-4d88-9d12-ee2ff45ef349-apiservice-cert\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517905 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgxzj\" (UniqueName: \"kubernetes.io/projected/fb45e445-79c2-407e-bc3d-630465ec46ae-kube-api-access-cgxzj\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517938 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nkn4\" (UniqueName: \"kubernetes.io/projected/b973cff9-c88e-4a16-923e-4ade9d371af0-kube-api-access-8nkn4\") pod \"marketplace-operator-79b997595-5rpzh\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.517955 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv5wk\" (UniqueName: \"kubernetes.io/projected/9d3eee6b-070a-4ef5-b908-50fcc90e9ad6-kube-api-access-wv5wk\") pod \"service-ca-9c57cc56f-hf9vt\" (UID: \"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6\") " pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" Dec 04 09:21:16 crc 
kubenswrapper[4841]: I1204 09:21:16.517970 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ac74ed8-7277-42a5-b155-96f047e1b662-service-ca-bundle\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518074 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/abeb1c4f-7981-43be-b8f8-4df803599a4d-encryption-config\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518097 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb9cc\" (UniqueName: \"kubernetes.io/projected/0b69851e-84ff-4014-8abe-5d28b0180416-kube-api-access-tb9cc\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518157 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5rpzh\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518174 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-plugins-dir\") pod 
\"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518190 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9962d63-494b-466a-a0f4-feeaf089b3cc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-22k9m\" (UID: \"c9962d63-494b-466a-a0f4-feeaf089b3cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518207 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npswk\" (UniqueName: \"kubernetes.io/projected/c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc-kube-api-access-npswk\") pod \"multus-admission-controller-857f4d67dd-ql5vn\" (UID: \"c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518271 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abeb1c4f-7981-43be-b8f8-4df803599a4d-audit-dir\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518289 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-client-ca\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518325 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0fe777f4-152f-4849-88c8-d7be6ec7faf4-cert\") pod \"ingress-canary-8v24d\" (UID: \"0fe777f4-152f-4849-88c8-d7be6ec7faf4\") " pod="openshift-ingress-canary/ingress-canary-8v24d" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518340 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-mountpoint-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518355 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aacd7f5-b079-49b8-8792-9a27f38b29f9-config\") pod \"service-ca-operator-777779d784-drgr2\" (UID: \"9aacd7f5-b079-49b8-8792-9a27f38b29f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518370 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8ac74ed8-7277-42a5-b155-96f047e1b662-stats-auth\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518406 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcjfp\" (UniqueName: \"kubernetes.io/projected/8ac74ed8-7277-42a5-b155-96f047e1b662-kube-api-access-mcjfp\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:16 crc 
kubenswrapper[4841]: I1204 09:21:16.518435 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c9eeef2c-8b36-4fea-86d7-5732fad3d501-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518472 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5pg2\" (UniqueName: \"kubernetes.io/projected/c07648d4-9c54-43ab-8446-762760141c94-kube-api-access-w5pg2\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" Dec 04 09:21:16 crc kubenswrapper[4841]: E1204 09:21:16.518551 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.018527641 +0000 UTC m=+143.770317845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518610 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-image-import-ca\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518639 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wlkk\" (UniqueName: \"kubernetes.io/projected/5504f1d6-044f-4e0c-881d-196123a56998-kube-api-access-9wlkk\") pod \"machine-config-server-ngpxk\" (UID: \"5504f1d6-044f-4e0c-881d-196123a56998\") " pod="openshift-machine-config-operator/machine-config-server-ngpxk" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518673 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/abeb1c4f-7981-43be-b8f8-4df803599a4d-etcd-client\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518706 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/abeb1c4f-7981-43be-b8f8-4df803599a4d-node-pullsecrets\") pod \"apiserver-76f77b778f-xfscp\" (UID: 
\"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518723 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk49c\" (UniqueName: \"kubernetes.io/projected/2c594333-c21f-49ee-9519-f86dc5bca7f1-kube-api-access-vk49c\") pod \"dns-operator-744455d44c-9nwkf\" (UID: \"2c594333-c21f-49ee-9519-f86dc5bca7f1\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.518795 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-registration-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.519455 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-config\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.519477 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bef2657-6cef-438b-965b-ad22a37457d0-config\") pod \"kube-controller-manager-operator-78b949d7b-j45m5\" (UID: \"4bef2657-6cef-438b-965b-ad22a37457d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.520735 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-certificates\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.520800 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-etcd-serving-ca\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521283 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pdb4\" (UniqueName: \"kubernetes.io/projected/0fe777f4-152f-4849-88c8-d7be6ec7faf4-kube-api-access-9pdb4\") pod \"ingress-canary-8v24d\" (UID: \"0fe777f4-152f-4849-88c8-d7be6ec7faf4\") " pod="openshift-ingress-canary/ingress-canary-8v24d" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521313 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34aba487-9181-4d88-9d12-ee2ff45ef349-webhook-cert\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521344 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvtg\" (UniqueName: \"kubernetes.io/projected/bf485bd8-a407-4ead-ba46-ef55f540c63e-kube-api-access-tqvtg\") pod \"olm-operator-6b444d44fb-8x6h4\" (UID: \"bf485bd8-a407-4ead-ba46-ef55f540c63e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521364 4841 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-audit\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521384 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqw69\" (UniqueName: \"kubernetes.io/projected/c201ee2d-0b9b-4737-b0d9-091ccd258e1e-kube-api-access-hqw69\") pod \"control-plane-machine-set-operator-78cbb6b69f-6zltn\" (UID: \"c201ee2d-0b9b-4737-b0d9-091ccd258e1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521457 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb45e445-79c2-407e-bc3d-630465ec46ae-serving-cert\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521475 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aacd7f5-b079-49b8-8792-9a27f38b29f9-serving-cert\") pod \"service-ca-operator-777779d784-drgr2\" (UID: \"9aacd7f5-b079-49b8-8792-9a27f38b29f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521579 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97424a80-37c4-4737-acac-de2182271f8d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521618 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521695 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b69851e-84ff-4014-8abe-5d28b0180416-service-ca-bundle\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521713 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24c7a712-e7b0-49ef-8e4c-80fbec70c8f4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-47x5z\" (UID: \"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521731 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c594333-c21f-49ee-9519-f86dc5bca7f1-metrics-tls\") pod \"dns-operator-744455d44c-9nwkf\" (UID: \"2c594333-c21f-49ee-9519-f86dc5bca7f1\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521748 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/9d3eee6b-070a-4ef5-b908-50fcc90e9ad6-signing-key\") pod \"service-ca-9c57cc56f-hf9vt\" (UID: \"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6\") " pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521830 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-socket-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521875 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5rpzh\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521894 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ql5vn\" (UID: \"c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521922 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lck4j\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-kube-api-access-lck4j\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521940 
4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9962d63-494b-466a-a0f4-feeaf089b3cc-proxy-tls\") pod \"machine-config-controller-84d6567774-22k9m\" (UID: \"c9962d63-494b-466a-a0f4-feeaf089b3cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521955 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c07648d4-9c54-43ab-8446-762760141c94-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.521993 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24c7a712-e7b0-49ef-8e4c-80fbec70c8f4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-47x5z\" (UID: \"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.522010 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9fjt\" (UniqueName: \"kubernetes.io/projected/97424a80-37c4-4737-acac-de2182271f8d-kube-api-access-d9fjt\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.522359 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-image-import-ca\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.522402 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-config\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.522487 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/abeb1c4f-7981-43be-b8f8-4df803599a4d-node-pullsecrets\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.522730 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c9eeef2c-8b36-4fea-86d7-5732fad3d501-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.523751 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/abeb1c4f-7981-43be-b8f8-4df803599a4d-audit\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.535997 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97424a80-37c4-4737-acac-de2182271f8d-images\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.536608 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/205da08e-f36b-4980-b220-8d7f456d1863-profile-collector-cert\") pod \"catalog-operator-68c6474976-hgj8p\" (UID: \"205da08e-f36b-4980-b220-8d7f456d1863\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.538289 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-client-ca\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.538492 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/abeb1c4f-7981-43be-b8f8-4df803599a4d-encryption-config\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.538993 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abeb1c4f-7981-43be-b8f8-4df803599a4d-audit-dir\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.539343 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b69851e-84ff-4014-8abe-5d28b0180416-serving-cert\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.539385 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24c7a712-e7b0-49ef-8e4c-80fbec70c8f4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-47x5z\" (UID: \"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.539414 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bef2657-6cef-438b-965b-ad22a37457d0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j45m5\" (UID: \"4bef2657-6cef-438b-965b-ad22a37457d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.539436 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abeb1c4f-7981-43be-b8f8-4df803599a4d-serving-cert\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.539654 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/abeb1c4f-7981-43be-b8f8-4df803599a4d-etcd-client\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.539657 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97424a80-37c4-4737-acac-de2182271f8d-proxy-tls\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.539869 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c9eeef2c-8b36-4fea-86d7-5732fad3d501-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.540102 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb45e445-79c2-407e-bc3d-630465ec46ae-serving-cert\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.541742 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b69851e-84ff-4014-8abe-5d28b0180416-config\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.542909 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/205da08e-f36b-4980-b220-8d7f456d1863-srv-cert\") pod \"catalog-operator-68c6474976-hgj8p\" (UID: \"205da08e-f36b-4980-b220-8d7f456d1863\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.544377 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b69851e-84ff-4014-8abe-5d28b0180416-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.545660 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-trusted-ca\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.547455 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b69851e-84ff-4014-8abe-5d28b0180416-service-ca-bundle\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.549093 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24c7a712-e7b0-49ef-8e4c-80fbec70c8f4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-47x5z\" (UID: \"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.551309 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c201ee2d-0b9b-4737-b0d9-091ccd258e1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6zltn\" (UID: \"c201ee2d-0b9b-4737-b0d9-091ccd258e1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.560471 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-tls\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.562348 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.563085 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c594333-c21f-49ee-9519-f86dc5bca7f1-metrics-tls\") pod \"dns-operator-744455d44c-9nwkf\" (UID: \"2c594333-c21f-49ee-9519-f86dc5bca7f1\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.566236 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgxzj\" (UniqueName: \"kubernetes.io/projected/fb45e445-79c2-407e-bc3d-630465ec46ae-kube-api-access-cgxzj\") pod \"route-controller-manager-6576b87f9c-2c4xn\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.586637 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bef2657-6cef-438b-965b-ad22a37457d0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j45m5\" (UID: \"4bef2657-6cef-438b-965b-ad22a37457d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.590069 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.601721 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk49c\" (UniqueName: \"kubernetes.io/projected/2c594333-c21f-49ee-9519-f86dc5bca7f1-kube-api-access-vk49c\") pod \"dns-operator-744455d44c-9nwkf\" (UID: \"2c594333-c21f-49ee-9519-f86dc5bca7f1\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.619009 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623425 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nkn4\" (UniqueName: \"kubernetes.io/projected/b973cff9-c88e-4a16-923e-4ade9d371af0-kube-api-access-8nkn4\") pod \"marketplace-operator-79b997595-5rpzh\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623450 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv5wk\" (UniqueName: \"kubernetes.io/projected/9d3eee6b-070a-4ef5-b908-50fcc90e9ad6-kube-api-access-wv5wk\") pod \"service-ca-9c57cc56f-hf9vt\" (UID: \"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6\") " pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623468 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ac74ed8-7277-42a5-b155-96f047e1b662-service-ca-bundle\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623484 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npswk\" (UniqueName: \"kubernetes.io/projected/c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc-kube-api-access-npswk\") pod \"multus-admission-controller-857f4d67dd-ql5vn\" (UID: \"c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623505 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5rpzh\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623522 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-plugins-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623536 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9962d63-494b-466a-a0f4-feeaf089b3cc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-22k9m\" (UID: \"c9962d63-494b-466a-a0f4-feeaf089b3cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623556 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0fe777f4-152f-4849-88c8-d7be6ec7faf4-cert\") pod \"ingress-canary-8v24d\" (UID: \"0fe777f4-152f-4849-88c8-d7be6ec7faf4\") " pod="openshift-ingress-canary/ingress-canary-8v24d"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623570 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-mountpoint-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623583 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aacd7f5-b079-49b8-8792-9a27f38b29f9-config\") pod \"service-ca-operator-777779d784-drgr2\" (UID: \"9aacd7f5-b079-49b8-8792-9a27f38b29f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623597 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8ac74ed8-7277-42a5-b155-96f047e1b662-stats-auth\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623613 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcjfp\" (UniqueName: \"kubernetes.io/projected/8ac74ed8-7277-42a5-b155-96f047e1b662-kube-api-access-mcjfp\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623628 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5pg2\" (UniqueName: \"kubernetes.io/projected/c07648d4-9c54-43ab-8446-762760141c94-kube-api-access-w5pg2\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623647 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wlkk\" (UniqueName: \"kubernetes.io/projected/5504f1d6-044f-4e0c-881d-196123a56998-kube-api-access-9wlkk\") pod \"machine-config-server-ngpxk\" (UID: \"5504f1d6-044f-4e0c-881d-196123a56998\") " pod="openshift-machine-config-operator/machine-config-server-ngpxk"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623664 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-registration-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623679 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pdb4\" (UniqueName: \"kubernetes.io/projected/0fe777f4-152f-4849-88c8-d7be6ec7faf4-kube-api-access-9pdb4\") pod \"ingress-canary-8v24d\" (UID: \"0fe777f4-152f-4849-88c8-d7be6ec7faf4\") " pod="openshift-ingress-canary/ingress-canary-8v24d"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623695 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34aba487-9181-4d88-9d12-ee2ff45ef349-webhook-cert\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623712 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqvtg\" (UniqueName: \"kubernetes.io/projected/bf485bd8-a407-4ead-ba46-ef55f540c63e-kube-api-access-tqvtg\") pod \"olm-operator-6b444d44fb-8x6h4\" (UID: \"bf485bd8-a407-4ead-ba46-ef55f540c63e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623736 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aacd7f5-b079-49b8-8792-9a27f38b29f9-serving-cert\") pod \"service-ca-operator-777779d784-drgr2\" (UID: \"9aacd7f5-b079-49b8-8792-9a27f38b29f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623754 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9d3eee6b-070a-4ef5-b908-50fcc90e9ad6-signing-key\") pod \"service-ca-9c57cc56f-hf9vt\" (UID: \"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6\") " pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623785 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-socket-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623802 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5rpzh\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623821 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ql5vn\" (UID: \"c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623847 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9962d63-494b-466a-a0f4-feeaf089b3cc-proxy-tls\") pod \"machine-config-controller-84d6567774-22k9m\" (UID: \"c9962d63-494b-466a-a0f4-feeaf089b3cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623862 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c07648d4-9c54-43ab-8446-762760141c94-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623889 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305da833-682f-4eac-a135-e06598b0a179-config-volume\") pod \"dns-default-jv68h\" (UID: \"305da833-682f-4eac-a135-e06598b0a179\") " pod="openshift-dns/dns-default-jv68h"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623924 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz4bv\" (UniqueName: \"kubernetes.io/projected/c9962d63-494b-466a-a0f4-feeaf089b3cc-kube-api-access-dz4bv\") pod \"machine-config-controller-84d6567774-22k9m\" (UID: \"c9962d63-494b-466a-a0f4-feeaf089b3cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623940 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzq4\" (UniqueName: \"kubernetes.io/projected/9aacd7f5-b079-49b8-8792-9a27f38b29f9-kube-api-access-lvzq4\") pod \"service-ca-operator-777779d784-drgr2\" (UID: \"9aacd7f5-b079-49b8-8792-9a27f38b29f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623955 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ac74ed8-7277-42a5-b155-96f047e1b662-metrics-certs\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623969 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/305da833-682f-4eac-a135-e06598b0a179-metrics-tls\") pod \"dns-default-jv68h\" (UID: \"305da833-682f-4eac-a135-e06598b0a179\") " pod="openshift-dns/dns-default-jv68h"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623982 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-config-volume\") pod \"collect-profiles-29413995-4fqg9\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.623997 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-csi-data-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624010 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5504f1d6-044f-4e0c-881d-196123a56998-node-bootstrap-token\") pod \"machine-config-server-ngpxk\" (UID: \"5504f1d6-044f-4e0c-881d-196123a56998\") " pod="openshift-machine-config-operator/machine-config-server-ngpxk"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624029 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgjln\" (UniqueName: \"kubernetes.io/projected/ca535148-cfdc-49e0-956b-c848b27b1a1a-kube-api-access-pgjln\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624045 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5504f1d6-044f-4e0c-881d-196123a56998-certs\") pod \"machine-config-server-ngpxk\" (UID: \"5504f1d6-044f-4e0c-881d-196123a56998\") " pod="openshift-machine-config-operator/machine-config-server-ngpxk"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624059 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-997l2\" (UniqueName: \"kubernetes.io/projected/34aba487-9181-4d88-9d12-ee2ff45ef349-kube-api-access-997l2\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624074 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8rc5\" (UniqueName: \"kubernetes.io/projected/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-kube-api-access-v8rc5\") pod \"collect-profiles-29413995-4fqg9\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624099 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c07648d4-9c54-43ab-8446-762760141c94-trusted-ca\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624115 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8ac74ed8-7277-42a5-b155-96f047e1b662-default-certificate\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624129 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c07648d4-9c54-43ab-8446-762760141c94-metrics-tls\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624150 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bf485bd8-a407-4ead-ba46-ef55f540c63e-srv-cert\") pod \"olm-operator-6b444d44fb-8x6h4\" (UID: \"bf485bd8-a407-4ead-ba46-ef55f540c63e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624167 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c45e4549-b737-472c-a86b-0b3ea110d7f1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-v7mdz\" (UID: \"c45e4549-b737-472c-a86b-0b3ea110d7f1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624185 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxbmf\" (UniqueName: \"kubernetes.io/projected/c45e4549-b737-472c-a86b-0b3ea110d7f1-kube-api-access-cxbmf\") pod \"package-server-manager-789f6589d5-v7mdz\" (UID: \"c45e4549-b737-472c-a86b-0b3ea110d7f1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624206 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn4m8\" (UniqueName: \"kubernetes.io/projected/305da833-682f-4eac-a135-e06598b0a179-kube-api-access-cn4m8\") pod \"dns-default-jv68h\" (UID: \"305da833-682f-4eac-a135-e06598b0a179\") " pod="openshift-dns/dns-default-jv68h"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624220 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-secret-volume\") pod \"collect-profiles-29413995-4fqg9\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624235 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34aba487-9181-4d88-9d12-ee2ff45ef349-tmpfs\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624248 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bf485bd8-a407-4ead-ba46-ef55f540c63e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8x6h4\" (UID: \"bf485bd8-a407-4ead-ba46-ef55f540c63e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624292 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9d3eee6b-070a-4ef5-b908-50fcc90e9ad6-signing-cabundle\") pod \"service-ca-9c57cc56f-hf9vt\" (UID: \"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6\") " pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624311 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb9cc\" (UniqueName: \"kubernetes.io/projected/0b69851e-84ff-4014-8abe-5d28b0180416-kube-api-access-tb9cc\") pod \"authentication-operator-69f744f599-tq8vh\" (UID: \"0b69851e-84ff-4014-8abe-5d28b0180416\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624335 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.624384 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34aba487-9181-4d88-9d12-ee2ff45ef349-apiservice-cert\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft"
Dec 04 09:21:16 crc kubenswrapper[4841]: E1204 09:21:16.624574 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.12456353 +0000 UTC m=+143.876353734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.625746 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ac74ed8-7277-42a5-b155-96f047e1b662-service-ca-bundle\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.626337 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305da833-682f-4eac-a135-e06598b0a179-config-volume\") pod \"dns-default-jv68h\" (UID: \"305da833-682f-4eac-a135-e06598b0a179\") " pod="openshift-dns/dns-default-jv68h"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.626539 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-registration-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.626587 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-socket-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.628574 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34aba487-9181-4d88-9d12-ee2ff45ef349-apiservice-cert\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.629249 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5rpzh\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.630565 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l"]
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.630920 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-mountpoint-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.630992 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-plugins-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.633640 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5rpzh\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.634249 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aacd7f5-b079-49b8-8792-9a27f38b29f9-config\") pod \"service-ca-operator-777779d784-drgr2\" (UID: \"9aacd7f5-b079-49b8-8792-9a27f38b29f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.637218 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c07648d4-9c54-43ab-8446-762760141c94-metrics-tls\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs"
Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.638170 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName:
\"kubernetes.io/secret/9d3eee6b-070a-4ef5-b908-50fcc90e9ad6-signing-key\") pod \"service-ca-9c57cc56f-hf9vt\" (UID: \"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6\") " pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.638863 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-secret-volume\") pod \"collect-profiles-29413995-4fqg9\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.638907 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ql5vn\" (UID: \"c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.639181 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34aba487-9181-4d88-9d12-ee2ff45ef349-tmpfs\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.640362 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9962d63-494b-466a-a0f4-feeaf089b3cc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-22k9m\" (UID: \"c9962d63-494b-466a-a0f4-feeaf089b3cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.640546 4841 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8ac74ed8-7277-42a5-b155-96f047e1b662-default-certificate\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.641211 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9d3eee6b-070a-4ef5-b908-50fcc90e9ad6-signing-cabundle\") pod \"service-ca-9c57cc56f-hf9vt\" (UID: \"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6\") " pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.643143 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0fe777f4-152f-4849-88c8-d7be6ec7faf4-cert\") pod \"ingress-canary-8v24d\" (UID: \"0fe777f4-152f-4849-88c8-d7be6ec7faf4\") " pod="openshift-ingress-canary/ingress-canary-8v24d" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.644014 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-config-volume\") pod \"collect-profiles-29413995-4fqg9\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.644111 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ca535148-cfdc-49e0-956b-c848b27b1a1a-csi-data-dir\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.644317 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.644907 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ac74ed8-7277-42a5-b155-96f047e1b662-metrics-certs\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.646288 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8ac74ed8-7277-42a5-b155-96f047e1b662-stats-auth\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.646583 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c07648d4-9c54-43ab-8446-762760141c94-trusted-ca\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.647571 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aacd7f5-b079-49b8-8792-9a27f38b29f9-serving-cert\") pod \"service-ca-operator-777779d784-drgr2\" (UID: \"9aacd7f5-b079-49b8-8792-9a27f38b29f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.648649 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/305da833-682f-4eac-a135-e06598b0a179-metrics-tls\") pod \"dns-default-jv68h\" (UID: \"305da833-682f-4eac-a135-e06598b0a179\") 
" pod="openshift-dns/dns-default-jv68h" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.653012 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w24pz"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.653890 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bf485bd8-a407-4ead-ba46-ef55f540c63e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8x6h4\" (UID: \"bf485bd8-a407-4ead-ba46-ef55f540c63e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.654462 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5504f1d6-044f-4e0c-881d-196123a56998-certs\") pod \"machine-config-server-ngpxk\" (UID: \"5504f1d6-044f-4e0c-881d-196123a56998\") " pod="openshift-machine-config-operator/machine-config-server-ngpxk" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.657188 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bf485bd8-a407-4ead-ba46-ef55f540c63e-srv-cert\") pod \"olm-operator-6b444d44fb-8x6h4\" (UID: \"bf485bd8-a407-4ead-ba46-ef55f540c63e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.657270 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34aba487-9181-4d88-9d12-ee2ff45ef349-webhook-cert\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.657415 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c45e4549-b737-472c-a86b-0b3ea110d7f1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-v7mdz\" (UID: \"c45e4549-b737-472c-a86b-0b3ea110d7f1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.657826 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9fjt\" (UniqueName: \"kubernetes.io/projected/97424a80-37c4-4737-acac-de2182271f8d-kube-api-access-d9fjt\") pod \"machine-config-operator-74547568cd-ft6rt\" (UID: \"97424a80-37c4-4737-acac-de2182271f8d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.657926 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9962d63-494b-466a-a0f4-feeaf089b3cc-proxy-tls\") pod \"machine-config-controller-84d6567774-22k9m\" (UID: \"c9962d63-494b-466a-a0f4-feeaf089b3cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.658028 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5504f1d6-044f-4e0c-881d-196123a56998-node-bootstrap-token\") pod \"machine-config-server-ngpxk\" (UID: \"5504f1d6-044f-4e0c-881d-196123a56998\") " pod="openshift-machine-config-operator/machine-config-server-ngpxk" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.665303 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqw69\" (UniqueName: \"kubernetes.io/projected/c201ee2d-0b9b-4737-b0d9-091ccd258e1e-kube-api-access-hqw69\") pod \"control-plane-machine-set-operator-78cbb6b69f-6zltn\" (UID: \"c201ee2d-0b9b-4737-b0d9-091ccd258e1e\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.687837 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2scj2\" (UniqueName: \"kubernetes.io/projected/115e6f25-1465-48f0-bfde-c6d51d98b2c5-kube-api-access-2scj2\") pod \"migrator-59844c95c7-7vbcr\" (UID: \"115e6f25-1465-48f0-bfde-c6d51d98b2c5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.704103 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-bound-sa-token\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.720539 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-42c5d"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.721390 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.729573 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4ttv\" (UniqueName: \"kubernetes.io/projected/205da08e-f36b-4980-b220-8d7f456d1863-kube-api-access-j4ttv\") pod \"catalog-operator-68c6474976-hgj8p\" (UID: \"205da08e-f36b-4980-b220-8d7f456d1863\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.733189 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:16 crc kubenswrapper[4841]: E1204 09:21:16.733485 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.233464981 +0000 UTC m=+143.985255185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.733564 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: E1204 09:21:16.733964 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.233957813 +0000 UTC m=+143.985748017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.749295 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfwzd\" (UniqueName: \"kubernetes.io/projected/abeb1c4f-7981-43be-b8f8-4df803599a4d-kube-api-access-wfwzd\") pod \"apiserver-76f77b778f-xfscp\" (UID: \"abeb1c4f-7981-43be-b8f8-4df803599a4d\") " pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: W1204 09:21:16.752868 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed1c595_af7f_4f3f_bcb2_7da2461d1d4b.slice/crio-223c9e5e5df9f594e3a45394a9bd162f52aa4b65a5ccc41b70f071d45efec58d WatchSource:0}: Error finding container 223c9e5e5df9f594e3a45394a9bd162f52aa4b65a5ccc41b70f071d45efec58d: Status 404 returned error can't find the container with id 223c9e5e5df9f594e3a45394a9bd162f52aa4b65a5ccc41b70f071d45efec58d Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.769030 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24c7a712-e7b0-49ef-8e4c-80fbec70c8f4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-47x5z\" (UID: \"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.785606 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lck4j\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-kube-api-access-lck4j\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.790463 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-h8dcv"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.791907 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qmlf5"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.808625 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-prd7v"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.823091 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.823340 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.823804 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9mhm"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.835217 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.835849 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:16 crc kubenswrapper[4841]: E1204 09:21:16.836026 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.3360055 +0000 UTC m=+144.087795704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.847713 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nkn4\" (UniqueName: \"kubernetes.io/projected/b973cff9-c88e-4a16-923e-4ade9d371af0-kube-api-access-8nkn4\") pod \"marketplace-operator-79b997595-5rpzh\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.853621 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.867272 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv5wk\" (UniqueName: \"kubernetes.io/projected/9d3eee6b-070a-4ef5-b908-50fcc90e9ad6-kube-api-access-wv5wk\") pod \"service-ca-9c57cc56f-hf9vt\" (UID: \"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6\") " pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.878440 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz4bv\" (UniqueName: \"kubernetes.io/projected/c9962d63-494b-466a-a0f4-feeaf089b3cc-kube-api-access-dz4bv\") pod \"machine-config-controller-84d6567774-22k9m\" (UID: \"c9962d63-494b-466a-a0f4-feeaf089b3cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.890394 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.904380 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.905448 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzq4\" (UniqueName: \"kubernetes.io/projected/9aacd7f5-b079-49b8-8792-9a27f38b29f9-kube-api-access-lvzq4\") pod \"service-ca-operator-777779d784-drgr2\" (UID: \"9aacd7f5-b079-49b8-8792-9a27f38b29f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.905460 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.908037 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wlkk\" (UniqueName: \"kubernetes.io/projected/5504f1d6-044f-4e0c-881d-196123a56998-kube-api-access-9wlkk\") pod \"machine-config-server-ngpxk\" (UID: \"5504f1d6-044f-4e0c-881d-196123a56998\") " pod="openshift-machine-config-operator/machine-config-server-ngpxk" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.908995 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8q2hw"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.912161 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.915062 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.919448 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5"] Dec 04 09:21:16 crc kubenswrapper[4841]: W1204 09:21:16.919936 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88a93a4e_e527_443f_a5d9_fdd5dca46c1b.slice/crio-51982f0770d8b9019479200ad44c7f0bf842298e5d105bece38f39c62dfc0d91 WatchSource:0}: Error finding container 51982f0770d8b9019479200ad44c7f0bf842298e5d105bece38f39c62dfc0d91: Status 404 returned error can't find the container with id 51982f0770d8b9019479200ad44c7f0bf842298e5d105bece38f39c62dfc0d91 Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.920997 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" event={"ID":"3d4aee6f-c06c-452c-ad42-b6079e00f178","Type":"ContainerStarted","Data":"764f3d6304750738761133352343cab070717ce9c6c28d67159d6c7e371a390f"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.923441 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-prd7v" event={"ID":"87a01003-7343-4fba-ada1-2be090ebc0dd","Type":"ContainerStarted","Data":"a8948b9ded9fd9bc0ac142ff2970287aa8ea51aefb2e6d472a1386827eb70293"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.926174 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" 
event={"ID":"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b","Type":"ContainerStarted","Data":"223c9e5e5df9f594e3a45394a9bd162f52aa4b65a5ccc41b70f071d45efec58d"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.930110 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr"] Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.932493 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" event={"ID":"fb9caa10-a6d6-4277-b42d-aab13376f4d2","Type":"ContainerStarted","Data":"3667aea925a50b80d7200962b422e7b6906f4b3541c6ab8fe6742fb9ff1ac620"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.932519 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" event={"ID":"fb9caa10-a6d6-4277-b42d-aab13376f4d2","Type":"ContainerStarted","Data":"ade5ec804f12d2708dfd4de61105659f2b5819d6f32f44397c9f760391eeb831"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.932843 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.935811 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npswk\" (UniqueName: \"kubernetes.io/projected/c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc-kube-api-access-npswk\") pod \"multus-admission-controller-857f4d67dd-ql5vn\" (UID: \"c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.936417 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:16 crc kubenswrapper[4841]: E1204 09:21:16.936743 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.436729444 +0000 UTC m=+144.188519648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.944422 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn4m8\" (UniqueName: \"kubernetes.io/projected/305da833-682f-4eac-a135-e06598b0a179-kube-api-access-cn4m8\") pod \"dns-default-jv68h\" (UID: \"305da833-682f-4eac-a135-e06598b0a179\") " pod="openshift-dns/dns-default-jv68h" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.945901 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn" Dec 04 09:21:16 crc kubenswrapper[4841]: W1204 09:21:16.949837 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa89e0be_c464_47b6_b682_eea7a3d34bef.slice/crio-df4dcc1b7b11e4c51b056a74c606b249703a03253c264c2c9f69eb8da079ed29 WatchSource:0}: Error finding container df4dcc1b7b11e4c51b056a74c606b249703a03253c264c2c9f69eb8da079ed29: Status 404 returned error can't find the container with id df4dcc1b7b11e4c51b056a74c606b249703a03253c264c2c9f69eb8da079ed29 Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.952839 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" event={"ID":"1c255032-be85-465e-97e8-e0d47337c099","Type":"ContainerStarted","Data":"89b12c5c0ced6a192009af079b7ea9becb6f3942da7b0844697e01582e165fbc"} Dec 04 09:21:16 crc 
kubenswrapper[4841]: I1204 09:21:16.952904 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" event={"ID":"1c255032-be85-465e-97e8-e0d47337c099","Type":"ContainerStarted","Data":"0b9b9ea82475c70342d47bf576da84c70a219fbb82d392a829e483029ffba0e3"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.955016 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.959676 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.962867 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pdb4\" (UniqueName: \"kubernetes.io/projected/0fe777f4-152f-4849-88c8-d7be6ec7faf4-kube-api-access-9pdb4\") pod \"ingress-canary-8v24d\" (UID: \"0fe777f4-152f-4849-88c8-d7be6ec7faf4\") " pod="openshift-ingress-canary/ingress-canary-8v24d" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.965235 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" event={"ID":"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13","Type":"ContainerStarted","Data":"3e9daca449d7b0def04dc266a4d228265c2913d33d7bcb58475b063388eef7ed"} Dec 04 09:21:16 crc kubenswrapper[4841]: W1204 09:21:16.966122 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45203b92_de55_44d6_a0f8_c2559b74d65d.slice/crio-ccb839242dc83d11acafcd7aa24ab34b0cb43cf140f2d1e600495c25ae00b544 WatchSource:0}: Error finding container ccb839242dc83d11acafcd7aa24ab34b0cb43cf140f2d1e600495c25ae00b544: Status 404 returned error can't find the container with id 
ccb839242dc83d11acafcd7aa24ab34b0cb43cf140f2d1e600495c25ae00b544 Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.969377 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h8dcv" event={"ID":"4be459f8-8d18-441f-ae50-b56bdcf9367a","Type":"ContainerStarted","Data":"c48f7f63b930424c76f4c88eec187b2214aec8d90c43f1cef416efc08c960aad"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.975677 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" event={"ID":"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d","Type":"ContainerStarted","Data":"935d8a09ffa65f411e8097a856a636bd09014c54b5b0264e5c2544be6028900d"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.981500 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" event={"ID":"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe","Type":"ContainerStarted","Data":"2ba13c878470cb0999fd86965e3853eaf0f8c8535e698565eb7ffc4197807719"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.983568 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" event={"ID":"4b539de7-f02b-4f85-b40d-673095e3d1f9","Type":"ContainerStarted","Data":"92977477c47d77ea540a61fd12523682356ad451b00c0332dd887aacd5c28404"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.985513 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-w24pz" event={"ID":"13aab721-2ebe-42c3-b749-bcba3b51a71b","Type":"ContainerStarted","Data":"bc1f7d829b03121fd55718e24f0c28a34d8c7fb84a2808049eab3c02c8ea6031"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.985539 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-w24pz" 
event={"ID":"13aab721-2ebe-42c3-b749-bcba3b51a71b","Type":"ContainerStarted","Data":"6bba7f51f86e579c93f1a6c07fb9641ec3f3c66ed8c92ae3eee39437c24a0f24"} Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.985751 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.986552 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcjfp\" (UniqueName: \"kubernetes.io/projected/8ac74ed8-7277-42a5-b155-96f047e1b662-kube-api-access-mcjfp\") pod \"router-default-5444994796-xjqtj\" (UID: \"8ac74ed8-7277-42a5-b155-96f047e1b662\") " pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.995367 4841 patch_prober.go:28] interesting pod/console-operator-58897d9998-w24pz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 04 09:21:16 crc kubenswrapper[4841]: I1204 09:21:16.995422 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w24pz" podUID="13aab721-2ebe-42c3-b749-bcba3b51a71b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.005269 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.014177 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxbmf\" (UniqueName: \"kubernetes.io/projected/c45e4549-b737-472c-a86b-0b3ea110d7f1-kube-api-access-cxbmf\") pod \"package-server-manager-789f6589d5-v7mdz\" (UID: \"c45e4549-b737-472c-a86b-0b3ea110d7f1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.021844 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgjln\" (UniqueName: \"kubernetes.io/projected/ca535148-cfdc-49e0-956b-c848b27b1a1a-kube-api-access-pgjln\") pod \"csi-hostpathplugin-fhg4j\" (UID: \"ca535148-cfdc-49e0-956b-c848b27b1a1a\") " pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.042065 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.042205 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.542183818 +0000 UTC m=+144.293974032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.042367 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.042676 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.542666469 +0000 UTC m=+144.294456673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.043540 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.047819 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5pg2\" (UniqueName: \"kubernetes.io/projected/c07648d4-9c54-43ab-8446-762760141c94-kube-api-access-w5pg2\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.053389 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.065254 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c07648d4-9c54-43ab-8446-762760141c94-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qr6vs\" (UID: \"c07648d4-9c54-43ab-8446-762760141c94\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.065350 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.076174 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.083548 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-997l2\" (UniqueName: \"kubernetes.io/projected/34aba487-9181-4d88-9d12-ee2ff45ef349-kube-api-access-997l2\") pod \"packageserver-d55dfcdfc-867ft\" (UID: \"34aba487-9181-4d88-9d12-ee2ff45ef349\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.085228 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.096477 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ngpxk" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.104442 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqvtg\" (UniqueName: \"kubernetes.io/projected/bf485bd8-a407-4ead-ba46-ef55f540c63e-kube-api-access-tqvtg\") pod \"olm-operator-6b444d44fb-8x6h4\" (UID: \"bf485bd8-a407-4ead-ba46-ef55f540c63e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.107308 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jv68h" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.119089 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8v24d" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.123240 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8rc5\" (UniqueName: \"kubernetes.io/projected/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-kube-api-access-v8rc5\") pod \"collect-profiles-29413995-4fqg9\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.143571 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.144213 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.146152 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.646131173 +0000 UTC m=+144.397921377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.246735 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.247021 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.747009871 +0000 UTC m=+144.498800065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.272538 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.283328 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"] Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.312982 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.313586 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z"] Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.324374 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.335840 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.337452 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9nwkf"] Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.352370 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.352604 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-04 09:21:17.852586238 +0000 UTC m=+144.604376442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.454139 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.454440 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:17.954424469 +0000 UTC m=+144.706214673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.555660 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.555826 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.05580061 +0000 UTC m=+144.807590814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.556368 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.556739 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.056732224 +0000 UTC m=+144.808522428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.661224 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.661474 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.161460389 +0000 UTC m=+144.913250593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.743115 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-24xfh" podStartSLOduration=126.74305811 podStartE2EDuration="2m6.74305811s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:17.741813609 +0000 UTC m=+144.493603813" watchObservedRunningTime="2025-12-04 09:21:17.74305811 +0000 UTC m=+144.494848304" Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.762446 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.762935 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.262920302 +0000 UTC m=+145.014710506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.863190 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.863524 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.363510473 +0000 UTC m=+145.115300677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.964580 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:17 crc kubenswrapper[4841]: E1204 09:21:17.965042 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.465022656 +0000 UTC m=+145.216812930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.983741 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xfscp"] Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.989544 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rpzh"] Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.990571 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ngpxk" event={"ID":"5504f1d6-044f-4e0c-881d-196123a56998","Type":"ContainerStarted","Data":"cffbc8abc1faa6b3de53c5f676ade9bd46c3980df7d3bb8e465b2d7df05d97b8"} Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.992483 4841 generic.go:334] "Generic (PLEG): container finished" podID="88a93a4e-e527-443f-a5d9-fdd5dca46c1b" containerID="b3bb13fe1ea05522af9c8e14c1323f597288dfcfd71b235ab99178d0b62b912c" exitCode=0 Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.992514 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" event={"ID":"88a93a4e-e527-443f-a5d9-fdd5dca46c1b","Type":"ContainerDied","Data":"b3bb13fe1ea05522af9c8e14c1323f597288dfcfd71b235ab99178d0b62b912c"} Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.992569 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" 
event={"ID":"88a93a4e-e527-443f-a5d9-fdd5dca46c1b","Type":"ContainerStarted","Data":"51982f0770d8b9019479200ad44c7f0bf842298e5d105bece38f39c62dfc0d91"} Dec 04 09:21:17 crc kubenswrapper[4841]: I1204 09:21:17.999205 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" event={"ID":"c8845e08-1385-496a-b4e7-90151305bac3","Type":"ContainerStarted","Data":"9851c3bcaf204a64ddf1b88fb2fb4e890f4bbf4c411110bb747e7d498deeb202"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.002651 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h8dcv" event={"ID":"4be459f8-8d18-441f-ae50-b56bdcf9367a","Type":"ContainerStarted","Data":"37861ea93df4097290be98cf1c62d793525735f9637febacf18bd1a418b34f28"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.003158 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-h8dcv" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.009260 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" event={"ID":"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4","Type":"ContainerStarted","Data":"af60b824bea45b90c5de38cadd563db5277e2f6c3b91c1f0b9f6bf8126a472ff"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.014844 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" event={"ID":"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13","Type":"ContainerStarted","Data":"69d044a96610f3c43839a20c1aee8645e7806ab64eaedaf5b94e7a804dc84779"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.015832 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.020693 4841 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-h8dcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.020748 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h8dcv" podUID="4be459f8-8d18-441f-ae50-b56bdcf9367a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.020694 4841 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-m9mhm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.021020 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" podUID="35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.025007 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" event={"ID":"45203b92-de55-44d6-a0f8-c2559b74d65d","Type":"ContainerStarted","Data":"ccb839242dc83d11acafcd7aa24ab34b0cb43cf140f2d1e600495c25ae00b544"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.027801 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" 
event={"ID":"4b539de7-f02b-4f85-b40d-673095e3d1f9","Type":"ContainerStarted","Data":"ca722672dbefc7d5d76c7c1419762128a3efa25b187dc24bf5884da72f103e56"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.038553 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tq8vh"] Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.042727 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" event={"ID":"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b","Type":"ContainerStarted","Data":"8884be6561a6fbc97f784ad4bb605d2231faefef1449e8bcdb7a9196e22adb5e"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.042808 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" event={"ID":"6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b","Type":"ContainerStarted","Data":"08f9ab64e1fd2594d69e602d9fed18ec07d9de36be61dcd49b73b2baef0cfe09"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.046608 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" event={"ID":"fa89e0be-c464-47b6-b682-eea7a3d34bef","Type":"ContainerStarted","Data":"df4dcc1b7b11e4c51b056a74c606b249703a03253c264c2c9f69eb8da079ed29"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.066121 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.066401 4841 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.566386837 +0000 UTC m=+145.318177041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.067090 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" event={"ID":"3d4aee6f-c06c-452c-ad42-b6079e00f178","Type":"ContainerStarted","Data":"47cab59c48ddf001298ff59f85afa590888911fdc16ebd000d73fe737d9425f2"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.067142 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" event={"ID":"3d4aee6f-c06c-452c-ad42-b6079e00f178","Type":"ContainerStarted","Data":"ed7a5f35a94a7800294313b6ad7ef235ff8addc05fa192a2f08cb1d890e23c3b"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.076617 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" event={"ID":"2ff4059e-7093-443d-9cc1-c24a2f7e912d","Type":"ContainerStarted","Data":"7a3773ac887c67d8ca8eddc2e22ffb761172fe67417f3f1b3e9dabd1f464ba68"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.081412 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xjqtj" 
event={"ID":"8ac74ed8-7277-42a5-b155-96f047e1b662","Type":"ContainerStarted","Data":"44d3a1795e37074874a43b3ee6280b605d38b7bc0ad9b4baeeffb7b734e17ad2"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.093557 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf" event={"ID":"2c594333-c21f-49ee-9519-f86dc5bca7f1","Type":"ContainerStarted","Data":"c2347c7f0776acee4912f781dc0a3e1bc00f52edfb75a5e88faabd06c267c575"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.104223 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" event={"ID":"4bef2657-6cef-438b-965b-ad22a37457d0","Type":"ContainerStarted","Data":"49201cb3b75a56eda3b453264670de2390a6f33b1fc075e8e1554dcd07db466a"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.104277 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" event={"ID":"4bef2657-6cef-438b-965b-ad22a37457d0","Type":"ContainerStarted","Data":"10cf1e5ffb16536237288aba29ecca723a4d9f09eb490f8e86b96919caf51b4b"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.107464 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-prd7v" event={"ID":"87a01003-7343-4fba-ada1-2be090ebc0dd","Type":"ContainerStarted","Data":"32c6d15b11df8f67e05026632dc4f074567d804625e4cc4fdea1f66fa3833c85"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.115090 4841 generic.go:334] "Generic (PLEG): container finished" podID="bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d" containerID="0ab757ef906d7d81097664dbceb9151f351cf5cd4ee962ce05b9df8a9dc59403" exitCode=0 Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.115862 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" 
event={"ID":"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d","Type":"ContainerDied","Data":"0ab757ef906d7d81097664dbceb9151f351cf5cd4ee962ce05b9df8a9dc59403"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.123453 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" event={"ID":"fb45e445-79c2-407e-bc3d-630465ec46ae","Type":"ContainerStarted","Data":"cc182ae347d0e9b040f45900291ba09ea13017521b6d96f9e9e2f4d90d513394"} Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.137149 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-w24pz" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.167754 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.172160 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.672145157 +0000 UTC m=+145.423935361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.268953 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jf82v" podStartSLOduration=126.265056894 podStartE2EDuration="2m6.265056894s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:18.228718337 +0000 UTC m=+144.980508551" watchObservedRunningTime="2025-12-04 09:21:18.265056894 +0000 UTC m=+145.016847098" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.270140 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.271415 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.771400555 +0000 UTC m=+145.523190759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.371389 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.371736 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.871719439 +0000 UTC m=+145.623509643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.475398 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.475630 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.975605782 +0000 UTC m=+145.727395986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.475677 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.478234 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:18.978224139 +0000 UTC m=+145.730014333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.578963 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.579151 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.079113457 +0000 UTC m=+145.830903661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.579605 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.579972 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.079958238 +0000 UTC m=+145.831748442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.585547 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-w24pz" podStartSLOduration=127.585535288 podStartE2EDuration="2m7.585535288s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:18.583821275 +0000 UTC m=+145.335611489" watchObservedRunningTime="2025-12-04 09:21:18.585535288 +0000 UTC m=+145.337325492" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.680308 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.680738 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.180723653 +0000 UTC m=+145.932513857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.718549 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-42c5d" podStartSLOduration=126.718524477 podStartE2EDuration="2m6.718524477s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:18.710560127 +0000 UTC m=+145.462350341" watchObservedRunningTime="2025-12-04 09:21:18.718524477 +0000 UTC m=+145.470314721" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.738972 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppqhq" podStartSLOduration=127.738951794 podStartE2EDuration="2m7.738951794s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:18.738114942 +0000 UTC m=+145.489905146" watchObservedRunningTime="2025-12-04 09:21:18.738951794 +0000 UTC m=+145.490741998" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.784276 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: 
\"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.784599 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.284587847 +0000 UTC m=+146.036378051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.799704 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" podStartSLOduration=127.799682017 podStartE2EDuration="2m7.799682017s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:18.794917747 +0000 UTC m=+145.546707951" watchObservedRunningTime="2025-12-04 09:21:18.799682017 +0000 UTC m=+145.551472221" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.885609 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.886014 4841 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.385998938 +0000 UTC m=+146.137789142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.911836 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j45m5" podStartSLOduration=126.91182132 podStartE2EDuration="2m6.91182132s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:18.911540723 +0000 UTC m=+145.663330967" watchObservedRunningTime="2025-12-04 09:21:18.91182132 +0000 UTC m=+145.663611524" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.944527 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-h8dcv" podStartSLOduration=127.944513466 podStartE2EDuration="2m7.944513466s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:18.942378492 +0000 UTC m=+145.694168696" watchObservedRunningTime="2025-12-04 09:21:18.944513466 +0000 UTC m=+145.696303670" Dec 04 09:21:18 crc 
kubenswrapper[4841]: I1204 09:21:18.983960 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-prd7v" podStartSLOduration=127.983940212 podStartE2EDuration="2m7.983940212s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:18.982184338 +0000 UTC m=+145.733974542" watchObservedRunningTime="2025-12-04 09:21:18.983940212 +0000 UTC m=+145.735730416" Dec 04 09:21:18 crc kubenswrapper[4841]: I1204 09:21:18.991386 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:18 crc kubenswrapper[4841]: E1204 09:21:18.991697 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.491686547 +0000 UTC m=+146.243476751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.064860 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.067877 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.094109 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ql5vn"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.094209 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:19 crc kubenswrapper[4841]: E1204 09:21:19.094560 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.594545215 +0000 UTC m=+146.346335419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.155460 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.159270 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.159305 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.170638 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.187483 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.199804 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:19 crc kubenswrapper[4841]: E1204 09:21:19.200613 4841 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.700600804 +0000 UTC m=+146.452391008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.209098 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" event={"ID":"4b539de7-f02b-4f85-b40d-673095e3d1f9","Type":"ContainerStarted","Data":"a18a97249b150aa1e09ae32be4d7d6699f98b70293bcf4a68d55130351f3b2e0"} Dec 04 09:21:19 crc kubenswrapper[4841]: W1204 09:21:19.224398 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod205da08e_f36b_4980_b220_8d7f456d1863.slice/crio-029022285fddc185b58b0152eb21e6ec4deee7aec48c81499a228d789105263d WatchSource:0}: Error finding container 029022285fddc185b58b0152eb21e6ec4deee7aec48c81499a228d789105263d: Status 404 returned error can't find the container with id 029022285fddc185b58b0152eb21e6ec4deee7aec48c81499a228d789105263d Dec 04 09:21:19 crc kubenswrapper[4841]: W1204 09:21:19.225939 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc07648d4_9c54_43ab_8446_762760141c94.slice/crio-ba2eccea9d2cd955df4d7f5e52357b61ee3cb32b8e9453cf8c2b51dcbc280603 WatchSource:0}: Error finding container 
ba2eccea9d2cd955df4d7f5e52357b61ee3cb32b8e9453cf8c2b51dcbc280603: Status 404 returned error can't find the container with id ba2eccea9d2cd955df4d7f5e52357b61ee3cb32b8e9453cf8c2b51dcbc280603 Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.245606 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf" event={"ID":"2c594333-c21f-49ee-9519-f86dc5bca7f1","Type":"ContainerStarted","Data":"0ccacca49d2831acd475709dec60bd98181d963ab32d43be42b9ff728e2b108b"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.245646 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf" event={"ID":"2c594333-c21f-49ee-9519-f86dc5bca7f1","Type":"ContainerStarted","Data":"dcc25445c562f218940f77bd576a75b9d398a8d172b93a7adbda7bf1115fc172"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.252985 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-m8m9l" podStartSLOduration=128.252974487 podStartE2EDuration="2m8.252974487s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.251720485 +0000 UTC m=+146.003510689" watchObservedRunningTime="2025-12-04 09:21:19.252974487 +0000 UTC m=+146.004764691" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.254681 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" event={"ID":"fa89e0be-c464-47b6-b682-eea7a3d34bef","Type":"ContainerStarted","Data":"9421baa142e99c04ef7a1dc1e7f46e6134df9de53e867e24e33edea306fd73c2"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.257543 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" event={"ID":"45203b92-de55-44d6-a0f8-c2559b74d65d","Type":"ContainerStarted","Data":"038a38ca67953bf5b16072a0484d27f5f38ae1f836df9c088ba5a6c59daffc5d"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.307816 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:19 crc kubenswrapper[4841]: E1204 09:21:19.308427 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.808405137 +0000 UTC m=+146.560195341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.313251 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xjqtj" event={"ID":"8ac74ed8-7277-42a5-b155-96f047e1b662","Type":"ContainerStarted","Data":"56035ef06a9763847f924766c0580eb5d2a00d6e3b672c3515fefad44bde2adb"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.324628 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.326399 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jv68h"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.338553 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-drgr2"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.373359 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" event={"ID":"b973cff9-c88e-4a16-923e-4ade9d371af0","Type":"ContainerStarted","Data":"f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.373600 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" event={"ID":"b973cff9-c88e-4a16-923e-4ade9d371af0","Type":"ContainerStarted","Data":"7fa84d47433019298887bf6e54c8df80856e3b2a92abafb4dcdf654f08c990ee"} Dec 04 
09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.407594 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.407615 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8v24d"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.407686 4841 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5rpzh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.407716 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" podUID="b973cff9-c88e-4a16-923e-4ade9d371af0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.409268 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-9nwkf" podStartSLOduration=128.409243744 podStartE2EDuration="2m8.409243744s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.372983848 +0000 UTC m=+146.124774052" watchObservedRunningTime="2025-12-04 09:21:19.409243744 +0000 UTC m=+146.161033948" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.418608 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.422445 4841 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:19 crc kubenswrapper[4841]: E1204 09:21:19.422967 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:19.922951619 +0000 UTC m=+146.674741823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.439751 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7pssc" podStartSLOduration=127.439737403 podStartE2EDuration="2m7.439737403s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.437946798 +0000 UTC m=+146.189737002" watchObservedRunningTime="2025-12-04 09:21:19.439737403 +0000 UTC m=+146.191527607" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.446807 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" 
event={"ID":"c9962d63-494b-466a-a0f4-feeaf089b3cc","Type":"ContainerStarted","Data":"357abd8118290a1a98a7a79c26831c5301e216c511c0232cc02378ffac812daa"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.465377 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.465423 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hf9vt"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.468126 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vv79b" podStartSLOduration=127.468111371 podStartE2EDuration="2m7.468111371s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.467454964 +0000 UTC m=+146.219245168" watchObservedRunningTime="2025-12-04 09:21:19.468111371 +0000 UTC m=+146.219901575" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.475038 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fhg4j"] Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.513168 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" event={"ID":"97424a80-37c4-4737-acac-de2182271f8d","Type":"ContainerStarted","Data":"12589d96b78698da1ce6ba2d741ac0649f357fd733b54a25c146686ee8ea5910"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.515244 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xjqtj" podStartSLOduration=127.5152325 podStartE2EDuration="2m7.5152325s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.51323762 +0000 UTC m=+146.265027824" watchObservedRunningTime="2025-12-04 09:21:19.5152325 +0000 UTC m=+146.267022704" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.524284 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:19 crc kubenswrapper[4841]: E1204 09:21:19.525395 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:20.025380477 +0000 UTC m=+146.777170681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.536217 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" podStartSLOduration=127.53619989 podStartE2EDuration="2m7.53619989s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.535154554 +0000 UTC m=+146.286944758" watchObservedRunningTime="2025-12-04 09:21:19.53619989 +0000 UTC m=+146.287990084" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.538130 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" event={"ID":"24c7a712-e7b0-49ef-8e4c-80fbec70c8f4","Type":"ContainerStarted","Data":"f0535617faa41f24414ade63a32c45dbec1fdc08f7ddd8d848deafc9a843f975"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.542786 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" event={"ID":"2ff4059e-7093-443d-9cc1-c24a2f7e912d","Type":"ContainerStarted","Data":"5cb6a96f792a3bab84ddb33472685f92cd956b7d420b8968bdfce48ea568e066"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.560997 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ngpxk" 
event={"ID":"5504f1d6-044f-4e0c-881d-196123a56998","Type":"ContainerStarted","Data":"bd10bf8e6f33b6dd64db8aa380c419f2fdaf21f5b8185f4ecd1648f5435a8707"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.568088 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" event={"ID":"fb45e445-79c2-407e-bc3d-630465ec46ae","Type":"ContainerStarted","Data":"6edc1b232a23d00ed1b81fd25b108e30b9f631206a12609bb5e50d10d94f58e6"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.568884 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.573974 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" event={"ID":"88a93a4e-e527-443f-a5d9-fdd5dca46c1b","Type":"ContainerStarted","Data":"cc8f1f9fb3683bebdfa084daadf046a99316246066366725c7be374feae2a352"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.574007 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.584373 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-47x5z" podStartSLOduration=127.584354937 podStartE2EDuration="2m7.584354937s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.561163191 +0000 UTC m=+146.312953395" watchObservedRunningTime="2025-12-04 09:21:19.584354937 +0000 UTC m=+146.336145141" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.598953 4841 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" event={"ID":"bcaf6dd5-d377-4fda-866b-d4fd86b1ec3d","Type":"ContainerStarted","Data":"2f6518dbd326d983c70c7491619ce3abc3842ec285c6b91074add899159f68fb"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.601861 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" event={"ID":"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe","Type":"ContainerStarted","Data":"49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.602292 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:19 crc kubenswrapper[4841]: W1204 09:21:19.609229 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf485bd8_a407_4ead_ba46_ef55f540c63e.slice/crio-6e381ba41823755429a813bde2d1ecfc001e7665c8e68e17b57bc015adaac4d6 WatchSource:0}: Error finding container 6e381ba41823755429a813bde2d1ecfc001e7665c8e68e17b57bc015adaac4d6: Status 404 returned error can't find the container with id 6e381ba41823755429a813bde2d1ecfc001e7665c8e68e17b57bc015adaac4d6 Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.616891 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ngpxk" podStartSLOduration=6.616870138 podStartE2EDuration="6.616870138s" podCreationTimestamp="2025-12-04 09:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.585327991 +0000 UTC m=+146.337118195" watchObservedRunningTime="2025-12-04 09:21:19.616870138 +0000 UTC m=+146.368660342" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.617870 4841 generic.go:334] "Generic (PLEG): container finished" 
podID="abeb1c4f-7981-43be-b8f8-4df803599a4d" containerID="4d28d24ce716b75178e7ae85055a4c1917de18bec82cd7ebf7cb963e45f16801" exitCode=0 Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.618050 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" event={"ID":"abeb1c4f-7981-43be-b8f8-4df803599a4d","Type":"ContainerDied","Data":"4d28d24ce716b75178e7ae85055a4c1917de18bec82cd7ebf7cb963e45f16801"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.618081 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" event={"ID":"abeb1c4f-7981-43be-b8f8-4df803599a4d","Type":"ContainerStarted","Data":"e8e34b8e7e0359ae54d4c73f834e3cb2b88941945b1e8bb7fbab099977c22848"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.618301 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt9xr" podStartSLOduration=127.618295464 podStartE2EDuration="2m7.618295464s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.610023225 +0000 UTC m=+146.361813429" watchObservedRunningTime="2025-12-04 09:21:19.618295464 +0000 UTC m=+146.370085668" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.625385 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.625476 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.625515 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.625552 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.625584 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:19 crc kubenswrapper[4841]: E1204 09:21:19.627786 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:20.127747443 +0000 UTC m=+146.879537647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.629112 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.654523 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.656447 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" event={"ID":"0b69851e-84ff-4014-8abe-5d28b0180416","Type":"ContainerStarted","Data":"aa61e50ac471aa9d49de232b6c82525ea6b36fb8b8206f05c18ab3802d2934d9"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.656574 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" 
event={"ID":"0b69851e-84ff-4014-8abe-5d28b0180416","Type":"ContainerStarted","Data":"ed7bc339b6281ddb2371885d574b4a611d4cb24aa458d149a1bfcd0169bbd889"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.658443 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.661063 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.669173 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" event={"ID":"c8845e08-1385-496a-b4e7-90151305bac3","Type":"ContainerStarted","Data":"1be2098ae97aada6bd0bacf87c471a74e046d6ee3feea3111772f9fe6b443e05"} Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.669562 4841 patch_prober.go:28] interesting pod/downloads-7954f5f757-h8dcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.669653 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h8dcv" podUID="4be459f8-8d18-441f-ae50-b56bdcf9367a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: 
connect: connection refused" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.693680 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" podStartSLOduration=127.693665118 podStartE2EDuration="2m7.693665118s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.646740842 +0000 UTC m=+146.398531046" watchObservedRunningTime="2025-12-04 09:21:19.693665118 +0000 UTC m=+146.445455322" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.694711 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" podStartSLOduration=127.694706364 podStartE2EDuration="2m7.694706364s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.687555893 +0000 UTC m=+146.439346087" watchObservedRunningTime="2025-12-04 09:21:19.694706364 +0000 UTC m=+146.446496568" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.705922 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.714035 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.728250 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:19 crc kubenswrapper[4841]: E1204 09:21:19.729982 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:20.229951134 +0000 UTC m=+146.981741338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.780941 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.782431 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.793461 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" podStartSLOduration=127.793445358 podStartE2EDuration="2m7.793445358s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.767711708 +0000 UTC m=+146.519501912" watchObservedRunningTime="2025-12-04 09:21:19.793445358 +0000 UTC m=+146.545235562" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.832755 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:19 crc kubenswrapper[4841]: E1204 09:21:19.834165 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:20.334154206 +0000 UTC m=+147.085944410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.849082 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" podStartSLOduration=128.849066433 podStartE2EDuration="2m8.849066433s" podCreationTimestamp="2025-12-04 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.803336427 +0000 UTC m=+146.555126631" watchObservedRunningTime="2025-12-04 09:21:19.849066433 +0000 UTC m=+146.600856637" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.940198 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:19 crc kubenswrapper[4841]: E1204 09:21:19.940686 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:20.440671586 +0000 UTC m=+147.192461780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.940753 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.946123 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-8q2hw" podStartSLOduration=127.946107063 podStartE2EDuration="2m7.946107063s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.912818882 +0000 UTC m=+146.664609086" watchObservedRunningTime="2025-12-04 09:21:19.946107063 +0000 UTC m=+146.697897267" Dec 04 09:21:19 crc kubenswrapper[4841]: I1204 09:21:19.976751 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-tq8vh" podStartSLOduration=127.976734407 podStartE2EDuration="2m7.976734407s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:19.975314391 +0000 UTC m=+146.727104595" watchObservedRunningTime="2025-12-04 09:21:19.976734407 +0000 UTC m=+146.728524611" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.029619 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.042230 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:20 crc kubenswrapper[4841]: E1204 09:21:20.042618 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:20.542605971 +0000 UTC m=+147.294396175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.056833 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.080028 4841 patch_prober.go:28] interesting pod/router-default-5444994796-xjqtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 09:21:20 crc kubenswrapper[4841]: [-]has-synced failed: reason withheld Dec 04 09:21:20 crc kubenswrapper[4841]: [+]process-running 
ok Dec 04 09:21:20 crc kubenswrapper[4841]: healthz check failed Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.080301 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjqtj" podUID="8ac74ed8-7277-42a5-b155-96f047e1b662" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.148800 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:20 crc kubenswrapper[4841]: E1204 09:21:20.149318 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:20.649302455 +0000 UTC m=+147.401092659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.249993 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:20 crc kubenswrapper[4841]: E1204 09:21:20.250277 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:20.750265245 +0000 UTC m=+147.502055449 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.351905 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:20 crc kubenswrapper[4841]: E1204 09:21:20.352382 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:20.852367524 +0000 UTC m=+147.604157728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.454011 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:20 crc kubenswrapper[4841]: E1204 09:21:20.454302 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:20.954291099 +0000 UTC m=+147.706081303 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.503989 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.504038 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.556787 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:20 crc kubenswrapper[4841]: E1204 09:21:20.557101 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:21.057087775 +0000 UTC m=+147.808877979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.658850 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:20 crc kubenswrapper[4841]: E1204 09:21:20.663911 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:21.163889673 +0000 UTC m=+147.915679877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.729595 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" event={"ID":"c9962d63-494b-466a-a0f4-feeaf089b3cc","Type":"ContainerStarted","Data":"48ad2a2f460d43d20d059a19e33fae58d0026685bca7e03615b9272f52f887f7"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.729899 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" event={"ID":"c9962d63-494b-466a-a0f4-feeaf089b3cc","Type":"ContainerStarted","Data":"27a0047317ce86b78096a495ecc5256752e6361743068ae8deb72c72edd86cdc"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.752440 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" event={"ID":"97424a80-37c4-4737-acac-de2182271f8d","Type":"ContainerStarted","Data":"6adde92a90fc826e5bf0d22cd359d1218700696a5c011c04ea076789d741f69f"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.752493 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" event={"ID":"97424a80-37c4-4737-acac-de2182271f8d","Type":"ContainerStarted","Data":"e534e4a51e60788bae49321f18f604576b820bdde16d774384b59339ee05df53"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.763897 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:20 crc kubenswrapper[4841]: E1204 09:21:20.764803 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:21.264788301 +0000 UTC m=+148.016578505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.780443 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2" event={"ID":"9aacd7f5-b079-49b8-8792-9a27f38b29f9","Type":"ContainerStarted","Data":"bbbfe1b61c1a76387e51db3f4350a790b0487971b428d0a233452eba9b0cd581"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.780490 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2" event={"ID":"9aacd7f5-b079-49b8-8792-9a27f38b29f9","Type":"ContainerStarted","Data":"f300802c6d71d9007c6a7389ff31cc053c56717cc57fdb5d4fbdcc74fdd0a58d"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.788423 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" 
event={"ID":"205da08e-f36b-4980-b220-8d7f456d1863","Type":"ContainerStarted","Data":"72bb439276d4502a4d69358fb9dab8a82466db79a4498aff742f48891529429b"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.788467 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" event={"ID":"205da08e-f36b-4980-b220-8d7f456d1863","Type":"ContainerStarted","Data":"029022285fddc185b58b0152eb21e6ec4deee7aec48c81499a228d789105263d"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.788689 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.792321 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-22k9m" podStartSLOduration=128.792307026 podStartE2EDuration="2m8.792307026s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:20.76553976 +0000 UTC m=+147.517329974" watchObservedRunningTime="2025-12-04 09:21:20.792307026 +0000 UTC m=+147.544097230" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.792912 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ft6rt" podStartSLOduration=128.792907422 podStartE2EDuration="2m8.792907422s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:20.789337761 +0000 UTC m=+147.541127975" watchObservedRunningTime="2025-12-04 09:21:20.792907422 +0000 UTC m=+147.544697626" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.824295 4841 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"443585051be472a2d99e7f1c2521bffcf89855b2bcf5c7de31a9fca62d301f25"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.842019 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jv68h" event={"ID":"305da833-682f-4eac-a135-e06598b0a179","Type":"ContainerStarted","Data":"1187acb00b4c499d6c2ac720ff6111e06b441a95ac20c95cb3898a87f54ffe82"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.856898 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.872603 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hgj8p" podStartSLOduration=128.872588154 podStartE2EDuration="2m8.872588154s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:20.820998941 +0000 UTC m=+147.572789145" watchObservedRunningTime="2025-12-04 09:21:20.872588154 +0000 UTC m=+147.624378358" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.873430 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.873955 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-drgr2" podStartSLOduration=128.873950529 podStartE2EDuration="2m8.873950529s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:20.8720605 +0000 UTC m=+147.623850704" watchObservedRunningTime="2025-12-04 09:21:20.873950529 +0000 UTC m=+147.625740733" Dec 04 09:21:20 crc kubenswrapper[4841]: E1204 09:21:20.875101 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:21.375089628 +0000 UTC m=+148.126879832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.888979 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr" event={"ID":"115e6f25-1465-48f0-bfde-c6d51d98b2c5","Type":"ContainerStarted","Data":"4238077c81cbbbd904f2c6f5ef0c8b1b95fafdf6e68fcb98d1f244bb54d4ac20"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.889023 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr" event={"ID":"115e6f25-1465-48f0-bfde-c6d51d98b2c5","Type":"ContainerStarted","Data":"c10edb9d1b3739a4a195b84383f3ec05880fc96f2e399b2255e2dbb797db3c47"} Dec 04 09:21:20 crc 
kubenswrapper[4841]: I1204 09:21:20.924925 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" event={"ID":"ca535148-cfdc-49e0-956b-c848b27b1a1a","Type":"ContainerStarted","Data":"90f121dca5aca58abddc64ca9ac81774952920d8405a748ab06cc65a4c31f084"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.975690 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" event={"ID":"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6","Type":"ContainerStarted","Data":"596ae383aff1e943f79f897eaaa67da297ab29960abbca7eb126c541b95f1872"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.975731 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" event={"ID":"9d3eee6b-070a-4ef5-b908-50fcc90e9ad6","Type":"ContainerStarted","Data":"03ab56d8f1a4d6d8b71d90cc4da8679f11f54a32939d1973a2265c416237e294"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.976272 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:20 crc kubenswrapper[4841]: E1204 09:21:20.976586 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:21.47656841 +0000 UTC m=+148.228358624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.995070 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" event={"ID":"bf485bd8-a407-4ead-ba46-ef55f540c63e","Type":"ContainerStarted","Data":"c34546e725d72b9d2c85696c233671d2ff153165274f96f9147a41d0e09e2097"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.995111 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" event={"ID":"bf485bd8-a407-4ead-ba46-ef55f540c63e","Type":"ContainerStarted","Data":"6e381ba41823755429a813bde2d1ecfc001e7665c8e68e17b57bc015adaac4d6"} Dec 04 09:21:20 crc kubenswrapper[4841]: I1204 09:21:20.995744 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.010448 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hf9vt" podStartSLOduration=129.010426775 podStartE2EDuration="2m9.010426775s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:21.00029962 +0000 UTC m=+147.752089834" watchObservedRunningTime="2025-12-04 09:21:21.010426775 +0000 UTC m=+147.762216979" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.029112 4841 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" podStartSLOduration=129.029095147 podStartE2EDuration="2m9.029095147s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:21.029064267 +0000 UTC m=+147.780854471" watchObservedRunningTime="2025-12-04 09:21:21.029095147 +0000 UTC m=+147.780885351" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.040132 4841 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8x6h4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.040468 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" podUID="bf485bd8-a407-4ead-ba46-ef55f540c63e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.043778 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn" event={"ID":"c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc","Type":"ContainerStarted","Data":"a89033ecc7139e7972dbbf147e860e19c149b41926128ac961ba8eb66a35d90a"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.043809 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn" event={"ID":"c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc","Type":"ContainerStarted","Data":"3c6f59b97d9195eef753b848993fc526388d5911534d42428ec69991ee23bea7"} Dec 04 09:21:21 crc 
kubenswrapper[4841]: I1204 09:21:21.076182 4841 patch_prober.go:28] interesting pod/router-default-5444994796-xjqtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 09:21:21 crc kubenswrapper[4841]: [-]has-synced failed: reason withheld Dec 04 09:21:21 crc kubenswrapper[4841]: [+]process-running ok Dec 04 09:21:21 crc kubenswrapper[4841]: healthz check failed Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.076227 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjqtj" podUID="8ac74ed8-7277-42a5-b155-96f047e1b662" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.078901 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:21 crc kubenswrapper[4841]: E1204 09:21:21.111823 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:21.611804346 +0000 UTC m=+148.363594550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.137182 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8v24d" event={"ID":"0fe777f4-152f-4849-88c8-d7be6ec7faf4","Type":"ContainerStarted","Data":"55231baca21b945c91a3056773b4040d97aa1375fc62ed70a290f02e3821e78a"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.137218 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8v24d" event={"ID":"0fe777f4-152f-4849-88c8-d7be6ec7faf4","Type":"ContainerStarted","Data":"ad8c5c385e9d3e712fbc5a87ea6ec958510cc9bb31e46cd4aa3fd9192a6270fa"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.151336 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" event={"ID":"34aba487-9181-4d88-9d12-ee2ff45ef349","Type":"ContainerStarted","Data":"9385ed1c97c5fbef470da7a22ba2c67594c93876896d210ca8c609440b56b082"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.152144 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.169532 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8v24d" podStartSLOduration=8.169514244 podStartE2EDuration="8.169514244s" podCreationTimestamp="2025-12-04 09:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:21.155555551 +0000 UTC m=+147.907345755" watchObservedRunningTime="2025-12-04 09:21:21.169514244 +0000 UTC m=+147.921304448" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.173971 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" event={"ID":"c45e4549-b737-472c-a86b-0b3ea110d7f1","Type":"ContainerStarted","Data":"d391114fcd627d99674455e0afc7ca47a8475f1f7b222ca64cc937dd52c3cbc4"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.174008 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" event={"ID":"c45e4549-b737-472c-a86b-0b3ea110d7f1","Type":"ContainerStarted","Data":"d77e3d4cbd9570e72fe7472c2db3d4f6c0853ca6571c9c2e7f3e94c212b9bc7c"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.174583 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.190359 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:21 crc kubenswrapper[4841]: E1204 09:21:21.191460 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:21.691445558 +0000 UTC m=+148.443235762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.191695 4841 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-867ft container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.191723 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" podUID="34aba487-9181-4d88-9d12-ee2ff45ef349" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.195876 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.196328 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.206147 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" podStartSLOduration=129.206126238 podStartE2EDuration="2m9.206126238s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:21.20104972 +0000 UTC m=+147.952839924" watchObservedRunningTime="2025-12-04 09:21:21.206126238 +0000 UTC m=+147.957916442" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.222621 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" event={"ID":"c07648d4-9c54-43ab-8446-762760141c94","Type":"ContainerStarted","Data":"ce15b52ebb1d374c896cc66a16fa14b8ba60d001518a9323fd4ede56c053e178"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.223031 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" event={"ID":"c07648d4-9c54-43ab-8446-762760141c94","Type":"ContainerStarted","Data":"ba2eccea9d2cd955df4d7f5e52357b61ee3cb32b8e9453cf8c2b51dcbc280603"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.224878 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" event={"ID":"bf42b1e0-9eb4-43b6-841e-e39370fdf05b","Type":"ContainerStarted","Data":"5699d842e985c81909852ce5acbf9d5c996e33d53ab0af035e19cf79d47d42b3"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.224909 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" event={"ID":"bf42b1e0-9eb4-43b6-841e-e39370fdf05b","Type":"ContainerStarted","Data":"f3a7261cfaee7480401487e0cf56bae2d46c5c4a45a83ed7bfab616dd8ca91f8"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.234434 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn" event={"ID":"c201ee2d-0b9b-4737-b0d9-091ccd258e1e","Type":"ContainerStarted","Data":"a9a884c528e76a9a9ac1a8e5aac3ff7e275fe314b5a98ef4b4703f9bc5c54495"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.234751 4841 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn" event={"ID":"c201ee2d-0b9b-4737-b0d9-091ccd258e1e","Type":"ContainerStarted","Data":"75e76721101e952cf1af9859518cddb5984ff6242869d35bcfb35bc8cd1e0245"} Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.247076 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" podStartSLOduration=129.247061282 podStartE2EDuration="2m9.247061282s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:21.245182555 +0000 UTC m=+147.996972749" watchObservedRunningTime="2025-12-04 09:21:21.247061282 +0000 UTC m=+147.998851486" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.247997 4841 patch_prober.go:28] interesting pod/downloads-7954f5f757-h8dcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.248025 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h8dcv" podUID="4be459f8-8d18-441f-ae50-b56bdcf9367a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.248263 4841 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5rpzh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 
09:21:21.248301 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" podUID="b973cff9-c88e-4a16-923e-4ade9d371af0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.251191 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.284724 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tb8s6" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.293041 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" podStartSLOduration=129.293024193 podStartE2EDuration="2m9.293024193s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:21.291595867 +0000 UTC m=+148.043386081" watchObservedRunningTime="2025-12-04 09:21:21.293024193 +0000 UTC m=+148.044814397" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.293442 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:21 crc kubenswrapper[4841]: E1204 09:21:21.314583 4841 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:21.814570627 +0000 UTC m=+148.566360821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.349999 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" podStartSLOduration=129.349983052 podStartE2EDuration="2m9.349983052s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:21.349098579 +0000 UTC m=+148.100888773" watchObservedRunningTime="2025-12-04 09:21:21.349983052 +0000 UTC m=+148.101773246" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.375617 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6zltn" podStartSLOduration=129.375601989 podStartE2EDuration="2m9.375601989s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:21.375353233 +0000 UTC m=+148.127143437" watchObservedRunningTime="2025-12-04 09:21:21.375601989 +0000 UTC m=+148.127392193" Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.394156 4841 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:21 crc kubenswrapper[4841]: E1204 09:21:21.395508 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:21.895489771 +0000 UTC m=+148.647279975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.495361 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:21 crc kubenswrapper[4841]: E1204 09:21:21.495701 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:21.995690012 +0000 UTC m=+148.747480216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.596697 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:21 crc kubenswrapper[4841]: E1204 09:21:21.597389 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:22.09737382 +0000 UTC m=+148.849164024 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.698575 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:21 crc kubenswrapper[4841]: E1204 09:21:21.698849 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:22.198838613 +0000 UTC m=+148.950628817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.800696 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:21 crc kubenswrapper[4841]: E1204 09:21:21.800963 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:22.300948992 +0000 UTC m=+149.052739196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:21 crc kubenswrapper[4841]: I1204 09:21:21.902336 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:21 crc kubenswrapper[4841]: E1204 09:21:21.902742 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:22.402724493 +0000 UTC m=+149.154514687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.003390 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.003650 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:22.503636192 +0000 UTC m=+149.255426396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.059072 4841 patch_prober.go:28] interesting pod/router-default-5444994796-xjqtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 09:21:22 crc kubenswrapper[4841]: [-]has-synced failed: reason withheld Dec 04 09:21:22 crc kubenswrapper[4841]: [+]process-running ok Dec 04 09:21:22 crc kubenswrapper[4841]: healthz check failed Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.059127 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjqtj" podUID="8ac74ed8-7277-42a5-b155-96f047e1b662" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.105227 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.105502 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-04 09:21:22.605489784 +0000 UTC m=+149.357279988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.206808 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.206972 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:22.706948577 +0000 UTC m=+149.458738781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.207142 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.207444 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:22.707431059 +0000 UTC m=+149.459221263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.256403 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7c64c13575b93688575c08cf9a9858bd69e8f1eece580ef17b5c084c04a65a3e"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.256454 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4ce5b32a1cc725190918d8575215596339c35af505a82769028db50efc059d4e"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.279549 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" event={"ID":"c45e4549-b737-472c-a86b-0b3ea110d7f1","Type":"ContainerStarted","Data":"83db6653add262e705d0080ff6d9414cbe93e501049d321b9d2aa78940f2ffa7"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.306714 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" event={"ID":"ca535148-cfdc-49e0-956b-c848b27b1a1a","Type":"ContainerStarted","Data":"8cd73b1b424c99b898c96d48704f42f1e0d725ff12ef00c14a069f6a12e500ca"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.311284 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.311711 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:22.811644891 +0000 UTC m=+149.563435095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.317986 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qr6vs" event={"ID":"c07648d4-9c54-43ab-8446-762760141c94","Type":"ContainerStarted","Data":"edbfcc44f38159c2e8b1bce651384858cdd3bd794c05b588af89fd7ecd9b7803"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.320157 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn" event={"ID":"c17ae7aa-9cf4-40cf-8e9a-e17e1a9bd2bc","Type":"ContainerStarted","Data":"ffbcd3984b67eb6dbaf54c35b289cb0b75f727770954d69d0b431e7899b6f1c0"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.338224 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" 
event={"ID":"abeb1c4f-7981-43be-b8f8-4df803599a4d","Type":"ContainerStarted","Data":"65dfa7f642ce921cb892e0e41afa117db162716a5e317011bed75148d3cd0d38"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.338265 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" event={"ID":"abeb1c4f-7981-43be-b8f8-4df803599a4d","Type":"ContainerStarted","Data":"25fcf400f7a474e98d613c2c9ced465fba2e81645a50d7749ec53ce191c179b6"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.339728 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" event={"ID":"34aba487-9181-4d88-9d12-ee2ff45ef349","Type":"ContainerStarted","Data":"7e5f9563eae53bb9a6a806de4100f875e707888aff12bee5b641e97c3407bf1f"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.341424 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr" event={"ID":"115e6f25-1465-48f0-bfde-c6d51d98b2c5","Type":"ContainerStarted","Data":"2d896114ec3f6a8aa3b1f10972a8b3ac60c38e9c319d5eb639d1c5dbb0fbe74e"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.342505 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"95e3760e5640647d80a20e4328713f6ca989a2341cf8e09f0364cfe4d8651673"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.342528 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b7c0abf7bc5fb722078b7ce6b3a0353672c80bd5b1ee90022570aceb9fa04f72"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.343509 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dd2e866c2a680c59499dfedd61ea81b9bbefa472ee1b1167700a11047fbffcf7"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.343822 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.348465 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jv68h" event={"ID":"305da833-682f-4eac-a135-e06598b0a179","Type":"ContainerStarted","Data":"7a3c223303cdf0460458dec24cdc38e965f3cd179043cd92319869c5dacc3530"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.348507 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jv68h" event={"ID":"305da833-682f-4eac-a135-e06598b0a179","Type":"ContainerStarted","Data":"25254d2f65c8d344d145aeb02defbaafa6bc692c4af674384f1dce00f367c686"} Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.348523 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jv68h" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.358097 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.369914 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h7sxn" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.390092 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ql5vn" podStartSLOduration=130.390079812 podStartE2EDuration="2m10.390079812s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:22.38801261 +0000 UTC m=+149.139802814" watchObservedRunningTime="2025-12-04 09:21:22.390079812 +0000 UTC m=+149.141870016" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.413585 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.431388 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:22.931371485 +0000 UTC m=+149.683161789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.490294 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8x6h4" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.514753 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.515113 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.0150947 +0000 UTC m=+149.766884904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.539436 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n8h7m"] Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.540604 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.543467 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.562167 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n8h7m"] Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.564141 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7vbcr" podStartSLOduration=130.564124978 podStartE2EDuration="2m10.564124978s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:22.560083536 +0000 UTC m=+149.311873740" watchObservedRunningTime="2025-12-04 09:21:22.564124978 +0000 UTC m=+149.315915182" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.619941 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.619995 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-utilities\") pod \"community-operators-n8h7m\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.620013 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-catalog-content\") pod \"community-operators-n8h7m\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.620035 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbkmz\" (UniqueName: \"kubernetes.io/projected/24965104-a4c2-41bc-90af-19b331f214f0-kube-api-access-rbkmz\") pod \"community-operators-n8h7m\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.620318 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.120307217 +0000 UTC m=+149.872097421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.622756 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jv68h" podStartSLOduration=9.622740679 podStartE2EDuration="9.622740679s" podCreationTimestamp="2025-12-04 09:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:22.620717547 +0000 UTC m=+149.372507751" watchObservedRunningTime="2025-12-04 09:21:22.622740679 +0000 UTC m=+149.374530883" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.648876 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" podStartSLOduration=130.648860798 podStartE2EDuration="2m10.648860798s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:22.645884083 +0000 UTC m=+149.397674287" watchObservedRunningTime="2025-12-04 09:21:22.648860798 +0000 UTC m=+149.400651002" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.704867 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mlvxz"] Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.705692 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.707897 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.722235 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.722610 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-utilities\") pod \"community-operators-n8h7m\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.722659 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-catalog-content\") pod \"community-operators-n8h7m\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.722696 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbkmz\" (UniqueName: \"kubernetes.io/projected/24965104-a4c2-41bc-90af-19b331f214f0-kube-api-access-rbkmz\") pod \"community-operators-n8h7m\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.723622 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-catalog-content\") pod \"community-operators-n8h7m\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.723745 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.223726729 +0000 UTC m=+149.975516923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.724016 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-utilities\") pod \"community-operators-n8h7m\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.733325 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mlvxz"] Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.768353 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbkmz\" (UniqueName: \"kubernetes.io/projected/24965104-a4c2-41bc-90af-19b331f214f0-kube-api-access-rbkmz\") pod \"community-operators-n8h7m\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " 
pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.823591 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-utilities\") pod \"certified-operators-mlvxz\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") " pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.823631 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcjpl\" (UniqueName: \"kubernetes.io/projected/9edf6830-72af-441c-b5ff-c9b65706dcc0-kube-api-access-hcjpl\") pod \"certified-operators-mlvxz\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") " pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.823694 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.823716 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-catalog-content\") pod \"certified-operators-mlvxz\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") " pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.823988 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2025-12-04 09:21:23.323976911 +0000 UTC m=+150.075767115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.841931 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-867ft" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.856333 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.903717 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zxq8g"] Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.905031 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.924317 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.924543 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-catalog-content\") pod \"certified-operators-mlvxz\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") " pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.924567 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.424544431 +0000 UTC m=+150.176334635 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.924596 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcjpl\" (UniqueName: \"kubernetes.io/projected/9edf6830-72af-441c-b5ff-c9b65706dcc0-kube-api-access-hcjpl\") pod \"certified-operators-mlvxz\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") " pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.924616 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-utilities\") pod \"certified-operators-mlvxz\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") " pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.924676 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:22 crc kubenswrapper[4841]: E1204 09:21:22.924948 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-04 09:21:23.424937412 +0000 UTC m=+150.176727616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.925046 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-catalog-content\") pod \"certified-operators-mlvxz\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") " pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.925277 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-utilities\") pod \"certified-operators-mlvxz\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") " pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.930814 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxq8g"] Dec 04 09:21:22 crc kubenswrapper[4841]: I1204 09:21:22.969651 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcjpl\" (UniqueName: \"kubernetes.io/projected/9edf6830-72af-441c-b5ff-c9b65706dcc0-kube-api-access-hcjpl\") pod \"certified-operators-mlvxz\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") " pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.017679 4841 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.025697 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:23 crc kubenswrapper[4841]: E1204 09:21:23.025839 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.5258159 +0000 UTC m=+150.277606104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.025912 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-utilities\") pod \"community-operators-zxq8g\" (UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.025966 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.025995 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsxl\" (UniqueName: \"kubernetes.io/projected/ba167f5d-353d-4f7a-aba4-a8571b930170-kube-api-access-tnsxl\") pod \"community-operators-zxq8g\" (UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.026028 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-catalog-content\") pod \"community-operators-zxq8g\" (UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:23 crc kubenswrapper[4841]: E1204 09:21:23.026290 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.526279141 +0000 UTC m=+150.278069345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.067170 4841 patch_prober.go:28] interesting pod/router-default-5444994796-xjqtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 09:21:23 crc kubenswrapper[4841]: [-]has-synced failed: reason withheld Dec 04 09:21:23 crc kubenswrapper[4841]: [+]process-running ok Dec 04 09:21:23 crc kubenswrapper[4841]: healthz check failed Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.067215 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjqtj" podUID="8ac74ed8-7277-42a5-b155-96f047e1b662" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.093889 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kk6qx"] Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.094993 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.117312 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kk6qx"] Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.127572 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:23 crc kubenswrapper[4841]: E1204 09:21:23.127953 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.627920449 +0000 UTC m=+150.379710653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.128014 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-catalog-content\") pod \"community-operators-zxq8g\" (UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.128088 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-catalog-content\") pod \"certified-operators-kk6qx\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.128122 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqgcw\" (UniqueName: \"kubernetes.io/projected/54eced9a-177d-46f7-b7f5-d388d401ada9-kube-api-access-dqgcw\") pod \"certified-operators-kk6qx\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.128163 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-utilities\") pod \"community-operators-zxq8g\" (UID: 
\"ba167f5d-353d-4f7a-aba4-a8571b930170\") " pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.128185 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-utilities\") pod \"certified-operators-kk6qx\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.128223 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.128256 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnsxl\" (UniqueName: \"kubernetes.io/projected/ba167f5d-353d-4f7a-aba4-a8571b930170-kube-api-access-tnsxl\") pod \"community-operators-zxq8g\" (UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.129209 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-catalog-content\") pod \"community-operators-zxq8g\" (UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.129490 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-utilities\") pod 
\"community-operators-zxq8g\" (UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:23 crc kubenswrapper[4841]: E1204 09:21:23.129735 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.629727734 +0000 UTC m=+150.381517938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.172771 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.173812 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.178252 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnsxl\" (UniqueName: \"kubernetes.io/projected/ba167f5d-353d-4f7a-aba4-a8571b930170-kube-api-access-tnsxl\") pod \"community-operators-zxq8g\" (UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.181798 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.181976 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.186847 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.229478 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n8h7m"] Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.229750 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.230100 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.230130 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-catalog-content\") pod \"certified-operators-kk6qx\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.230149 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqgcw\" (UniqueName: \"kubernetes.io/projected/54eced9a-177d-46f7-b7f5-d388d401ada9-kube-api-access-dqgcw\") pod \"certified-operators-kk6qx\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.230180 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.230203 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-utilities\") pod \"certified-operators-kk6qx\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.230602 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-utilities\") pod \"certified-operators-kk6qx\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " 
pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: E1204 09:21:23.230668 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.730654503 +0000 UTC m=+150.482444707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.230947 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-catalog-content\") pod \"certified-operators-kk6qx\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.238609 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.248151 4841 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.257748 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqgcw\" (UniqueName: \"kubernetes.io/projected/54eced9a-177d-46f7-b7f5-d388d401ada9-kube-api-access-dqgcw\") pod \"certified-operators-kk6qx\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.333250 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.333307 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.333363 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:23 crc 
kubenswrapper[4841]: E1204 09:21:23.333716 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.833704946 +0000 UTC m=+150.585495150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.334067 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.355308 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.362022 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8h7m" event={"ID":"24965104-a4c2-41bc-90af-19b331f214f0","Type":"ContainerStarted","Data":"8a253062ea02a22f77132019d187304cce40a69aa1b697c3bb33a51beb8535ec"} Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.366385 4841 generic.go:334] "Generic (PLEG): 
container finished" podID="bf42b1e0-9eb4-43b6-841e-e39370fdf05b" containerID="5699d842e985c81909852ce5acbf9d5c996e33d53ab0af035e19cf79d47d42b3" exitCode=0 Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.366459 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" event={"ID":"bf42b1e0-9eb4-43b6-841e-e39370fdf05b","Type":"ContainerDied","Data":"5699d842e985c81909852ce5acbf9d5c996e33d53ab0af035e19cf79d47d42b3"} Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.375128 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" event={"ID":"ca535148-cfdc-49e0-956b-c848b27b1a1a","Type":"ContainerStarted","Data":"33601e86f86f18be2e0b8e757427c538ef7507f291950325a64922c2c2423326"} Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.421359 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.436188 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:23 crc kubenswrapper[4841]: E1204 09:21:23.436312 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.936286967 +0000 UTC m=+150.688077171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.436437 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:23 crc kubenswrapper[4841]: E1204 09:21:23.440097 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:23.940083013 +0000 UTC m=+150.691873297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.515050 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.517073 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxq8g"] Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.531089 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mlvxz"] Dec 04 09:21:23 crc kubenswrapper[4841]: W1204 09:21:23.538503 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba167f5d_353d_4f7a_aba4_a8571b930170.slice/crio-c87144e561391870a08f11429fd5210b361a5e95b32777a858ef504ad3297ff2 WatchSource:0}: Error finding container c87144e561391870a08f11429fd5210b361a5e95b32777a858ef504ad3297ff2: Status 404 returned error can't find the container with id c87144e561391870a08f11429fd5210b361a5e95b32777a858ef504ad3297ff2 Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.538719 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:23 crc kubenswrapper[4841]: E1204 09:21:23.539081 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:24.039067652 +0000 UTC m=+150.790857856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: W1204 09:21:23.544826 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9edf6830_72af_441c_b5ff_c9b65706dcc0.slice/crio-79d479a63e7a3cc8be132ca68f322259ae5a5e57263bcf88df7d3481909676fa WatchSource:0}: Error finding container 79d479a63e7a3cc8be132ca68f322259ae5a5e57263bcf88df7d3481909676fa: Status 404 returned error can't find the container with id 79d479a63e7a3cc8be132ca68f322259ae5a5e57263bcf88df7d3481909676fa Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.641403 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:23 crc kubenswrapper[4841]: E1204 09:21:23.647710 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 09:21:24.147688716 +0000 UTC m=+150.899478920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x6s7w" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.650434 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kk6qx"] Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.754845 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:23 crc kubenswrapper[4841]: E1204 09:21:23.755157 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 09:21:24.25510217 +0000 UTC m=+151.006892374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.825687 4841 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-04T09:21:23.248650788Z","Handler":null,"Name":""} Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.838468 4841 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.838505 4841 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.856596 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.860479 4841 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.860510 4841 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.893005 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x6s7w\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.949494 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.957303 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 09:21:23 crc kubenswrapper[4841]: I1204 09:21:23.967539 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.027151 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.064045 4841 patch_prober.go:28] interesting pod/router-default-5444994796-xjqtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 09:21:24 crc kubenswrapper[4841]: [-]has-synced failed: reason withheld Dec 04 09:21:24 crc kubenswrapper[4841]: [+]process-running ok Dec 04 09:21:24 crc kubenswrapper[4841]: healthz check failed Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.064113 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjqtj" podUID="8ac74ed8-7277-42a5-b155-96f047e1b662" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.221120 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6s7w"] Dec 04 09:21:24 crc kubenswrapper[4841]: W1204 09:21:24.226621 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9eeef2c_8b36_4fea_86d7_5732fad3d501.slice/crio-6a12f59d1bf0e4cd171fe039ce6db454193972f6af1f40972be9fdfe40af428a WatchSource:0}: Error finding container 6a12f59d1bf0e4cd171fe039ce6db454193972f6af1f40972be9fdfe40af428a: Status 404 returned error can't find the container with id 6a12f59d1bf0e4cd171fe039ce6db454193972f6af1f40972be9fdfe40af428a Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.383826 4841 generic.go:334] 
"Generic (PLEG): container finished" podID="24965104-a4c2-41bc-90af-19b331f214f0" containerID="5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc" exitCode=0 Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.383919 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8h7m" event={"ID":"24965104-a4c2-41bc-90af-19b331f214f0","Type":"ContainerDied","Data":"5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.385554 4841 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.385684 4841 generic.go:334] "Generic (PLEG): container finished" podID="54eced9a-177d-46f7-b7f5-d388d401ada9" containerID="ba486150b013e05e87baa97b0f220a73e61a83d7c3d57a1645a8af29ac8ffcfa" exitCode=0 Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.385708 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk6qx" event={"ID":"54eced9a-177d-46f7-b7f5-d388d401ada9","Type":"ContainerDied","Data":"ba486150b013e05e87baa97b0f220a73e61a83d7c3d57a1645a8af29ac8ffcfa"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.385753 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk6qx" event={"ID":"54eced9a-177d-46f7-b7f5-d388d401ada9","Type":"ContainerStarted","Data":"a2a1e32e45d9687ead79fede10573bb920087af149377601a12f8d3cc909a28a"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.389230 4841 generic.go:334] "Generic (PLEG): container finished" podID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerID="e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe" exitCode=0 Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.389502 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxq8g" 
event={"ID":"ba167f5d-353d-4f7a-aba4-a8571b930170","Type":"ContainerDied","Data":"e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.389537 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxq8g" event={"ID":"ba167f5d-353d-4f7a-aba4-a8571b930170","Type":"ContainerStarted","Data":"c87144e561391870a08f11429fd5210b361a5e95b32777a858ef504ad3297ff2"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.391594 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d","Type":"ContainerStarted","Data":"abd84ce7329a0b92e6fbdeeeda999f68ba6b6840fce1c32e83d827de43ec00e7"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.391640 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d","Type":"ContainerStarted","Data":"cea794257650a15dd7a09066ef2916e3a0bb02cba1fd6e217942cb5bc83d7ce0"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.400218 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" event={"ID":"ca535148-cfdc-49e0-956b-c848b27b1a1a","Type":"ContainerStarted","Data":"4b72ddc33619276c2bd44b206c35e9a3ca9359e2092772cba16a23dac4a60035"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.400259 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" event={"ID":"ca535148-cfdc-49e0-956b-c848b27b1a1a","Type":"ContainerStarted","Data":"1e24f8f25d38aed2dc9797f8bdc0a2f59a97e7f55c04d0abe78d9a5813e0e45f"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.402085 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" 
event={"ID":"c9eeef2c-8b36-4fea-86d7-5732fad3d501","Type":"ContainerStarted","Data":"03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.402152 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" event={"ID":"c9eeef2c-8b36-4fea-86d7-5732fad3d501","Type":"ContainerStarted","Data":"6a12f59d1bf0e4cd171fe039ce6db454193972f6af1f40972be9fdfe40af428a"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.402407 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.407001 4841 generic.go:334] "Generic (PLEG): container finished" podID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerID="dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1" exitCode=0 Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.407076 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlvxz" event={"ID":"9edf6830-72af-441c-b5ff-c9b65706dcc0","Type":"ContainerDied","Data":"dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.407440 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlvxz" event={"ID":"9edf6830-72af-441c-b5ff-c9b65706dcc0","Type":"ContainerStarted","Data":"79d479a63e7a3cc8be132ca68f322259ae5a5e57263bcf88df7d3481909676fa"} Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.444012 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-fhg4j" podStartSLOduration=10.443997479 podStartE2EDuration="10.443997479s" podCreationTimestamp="2025-12-04 09:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-04 09:21:24.442017769 +0000 UTC m=+151.193807983" watchObservedRunningTime="2025-12-04 09:21:24.443997479 +0000 UTC m=+151.195787683" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.522175 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4lmrm"] Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.522665 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" podStartSLOduration=132.522643746 podStartE2EDuration="2m12.522643746s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:24.512743536 +0000 UTC m=+151.264533750" watchObservedRunningTime="2025-12-04 09:21:24.522643746 +0000 UTC m=+151.274433950" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.523522 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.526218 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.531620 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lmrm"] Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.544736 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.544718292 podStartE2EDuration="1.544718292s" podCreationTimestamp="2025-12-04 09:21:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:24.52837654 +0000 UTC m=+151.280166744" watchObservedRunningTime="2025-12-04 09:21:24.544718292 +0000 UTC m=+151.296508496" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.565138 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-catalog-content\") pod \"redhat-marketplace-4lmrm\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.565231 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-utilities\") pod \"redhat-marketplace-4lmrm\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.565367 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-grb29\" (UniqueName: \"kubernetes.io/projected/b148c13b-ac9d-4df8-9960-7a98df30bc57-kube-api-access-grb29\") pod \"redhat-marketplace-4lmrm\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.599215 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.666199 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8rc5\" (UniqueName: \"kubernetes.io/projected/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-kube-api-access-v8rc5\") pod \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.666545 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-secret-volume\") pod \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.666574 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-config-volume\") pod \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\" (UID: \"bf42b1e0-9eb4-43b6-841e-e39370fdf05b\") " Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.666873 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-catalog-content\") pod \"redhat-marketplace-4lmrm\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: 
I1204 09:21:24.666955 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-utilities\") pod \"redhat-marketplace-4lmrm\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.666986 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grb29\" (UniqueName: \"kubernetes.io/projected/b148c13b-ac9d-4df8-9960-7a98df30bc57-kube-api-access-grb29\") pod \"redhat-marketplace-4lmrm\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.667355 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-catalog-content\") pod \"redhat-marketplace-4lmrm\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.667463 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-utilities\") pod \"redhat-marketplace-4lmrm\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.667683 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-config-volume" (OuterVolumeSpecName: "config-volume") pod "bf42b1e0-9eb4-43b6-841e-e39370fdf05b" (UID: "bf42b1e0-9eb4-43b6-841e-e39370fdf05b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.672626 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bf42b1e0-9eb4-43b6-841e-e39370fdf05b" (UID: "bf42b1e0-9eb4-43b6-841e-e39370fdf05b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.672797 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-kube-api-access-v8rc5" (OuterVolumeSpecName: "kube-api-access-v8rc5") pod "bf42b1e0-9eb4-43b6-841e-e39370fdf05b" (UID: "bf42b1e0-9eb4-43b6-841e-e39370fdf05b"). InnerVolumeSpecName "kube-api-access-v8rc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.684830 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grb29\" (UniqueName: \"kubernetes.io/projected/b148c13b-ac9d-4df8-9960-7a98df30bc57-kube-api-access-grb29\") pod \"redhat-marketplace-4lmrm\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.767816 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8rc5\" (UniqueName: \"kubernetes.io/projected/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-kube-api-access-v8rc5\") on node \"crc\" DevicePath \"\"" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.767845 4841 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.767854 4841 reconciler_common.go:293] "Volume 
detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf42b1e0-9eb4-43b6-841e-e39370fdf05b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.865592 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.896402 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cvfq2"] Dec 04 09:21:24 crc kubenswrapper[4841]: E1204 09:21:24.896647 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf42b1e0-9eb4-43b6-841e-e39370fdf05b" containerName="collect-profiles" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.896686 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf42b1e0-9eb4-43b6-841e-e39370fdf05b" containerName="collect-profiles" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.896839 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf42b1e0-9eb4-43b6-841e-e39370fdf05b" containerName="collect-profiles" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.897671 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.912627 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvfq2"] Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.970777 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-utilities\") pod \"redhat-marketplace-cvfq2\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.971169 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-catalog-content\") pod \"redhat-marketplace-cvfq2\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:24 crc kubenswrapper[4841]: I1204 09:21:24.971200 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbgbw\" (UniqueName: \"kubernetes.io/projected/53c05603-129b-4fb9-b0b9-b976b8ca5a60-kube-api-access-pbgbw\") pod \"redhat-marketplace-cvfq2\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.057028 4841 patch_prober.go:28] interesting pod/router-default-5444994796-xjqtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 09:21:25 crc kubenswrapper[4841]: [-]has-synced failed: reason withheld Dec 04 09:21:25 crc kubenswrapper[4841]: [+]process-running ok Dec 04 09:21:25 crc kubenswrapper[4841]: healthz 
check failed Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.057078 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjqtj" podUID="8ac74ed8-7277-42a5-b155-96f047e1b662" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.072462 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-utilities\") pod \"redhat-marketplace-cvfq2\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.072521 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-catalog-content\") pod \"redhat-marketplace-cvfq2\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.072550 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbgbw\" (UniqueName: \"kubernetes.io/projected/53c05603-129b-4fb9-b0b9-b976b8ca5a60-kube-api-access-pbgbw\") pod \"redhat-marketplace-cvfq2\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.073384 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-catalog-content\") pod \"redhat-marketplace-cvfq2\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.073634 4841 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-utilities\") pod \"redhat-marketplace-cvfq2\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.076846 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lmrm"] Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.092641 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbgbw\" (UniqueName: \"kubernetes.io/projected/53c05603-129b-4fb9-b0b9-b976b8ca5a60-kube-api-access-pbgbw\") pod \"redhat-marketplace-cvfq2\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.249883 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.466670 4841 generic.go:334] "Generic (PLEG): container finished" podID="4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d" containerID="abd84ce7329a0b92e6fbdeeeda999f68ba6b6840fce1c32e83d827de43ec00e7" exitCode=0 Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.466772 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d","Type":"ContainerDied","Data":"abd84ce7329a0b92e6fbdeeeda999f68ba6b6840fce1c32e83d827de43ec00e7"} Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.469048 4841 generic.go:334] "Generic (PLEG): container finished" podID="b148c13b-ac9d-4df8-9960-7a98df30bc57" containerID="87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7" exitCode=0 Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.469104 4841 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-4lmrm" event={"ID":"b148c13b-ac9d-4df8-9960-7a98df30bc57","Type":"ContainerDied","Data":"87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7"} Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.469119 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lmrm" event={"ID":"b148c13b-ac9d-4df8-9960-7a98df30bc57","Type":"ContainerStarted","Data":"db4eeb6899dbed08b94bc02dfa9a80c2926173a40d0e80230d8f6c649852558a"} Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.472981 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.476186 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413995-4fqg9" event={"ID":"bf42b1e0-9eb4-43b6-841e-e39370fdf05b","Type":"ContainerDied","Data":"f3a7261cfaee7480401487e0cf56bae2d46c5c4a45a83ed7bfab616dd8ca91f8"} Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.476227 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3a7261cfaee7480401487e0cf56bae2d46c5c4a45a83ed7bfab616dd8ca91f8" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.647720 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.776350 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvfq2"] Dec 04 09:21:25 crc kubenswrapper[4841]: W1204 09:21:25.797662 4841 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53c05603_129b_4fb9_b0b9_b976b8ca5a60.slice/crio-5f2da1d05d2118bcd2a4d4adeebdee2358b9390db4d19ea0cdd49d498d12a9fa WatchSource:0}: Error finding container 5f2da1d05d2118bcd2a4d4adeebdee2358b9390db4d19ea0cdd49d498d12a9fa: Status 404 returned error can't find the container with id 5f2da1d05d2118bcd2a4d4adeebdee2358b9390db4d19ea0cdd49d498d12a9fa Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.892906 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vw6k8"] Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.893847 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.895985 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 04 09:21:25 crc kubenswrapper[4841]: I1204 09:21:25.903143 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vw6k8"] Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.043324 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-catalog-content\") pod \"redhat-operators-vw6k8\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.043632 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlxd2\" (UniqueName: \"kubernetes.io/projected/6275695a-0b4a-4e12-affd-bfabdffcf529-kube-api-access-vlxd2\") pod \"redhat-operators-vw6k8\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:26 crc kubenswrapper[4841]: 
I1204 09:21:26.043706 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-utilities\") pod \"redhat-operators-vw6k8\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.057482 4841 patch_prober.go:28] interesting pod/router-default-5444994796-xjqtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 09:21:26 crc kubenswrapper[4841]: [-]has-synced failed: reason withheld Dec 04 09:21:26 crc kubenswrapper[4841]: [+]process-running ok Dec 04 09:21:26 crc kubenswrapper[4841]: healthz check failed Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.057554 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjqtj" podUID="8ac74ed8-7277-42a5-b155-96f047e1b662" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.144510 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-catalog-content\") pod \"redhat-operators-vw6k8\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.144559 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlxd2\" (UniqueName: \"kubernetes.io/projected/6275695a-0b4a-4e12-affd-bfabdffcf529-kube-api-access-vlxd2\") pod \"redhat-operators-vw6k8\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:26 
crc kubenswrapper[4841]: I1204 09:21:26.144627 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-utilities\") pod \"redhat-operators-vw6k8\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.145329 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-utilities\") pod \"redhat-operators-vw6k8\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.145467 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-catalog-content\") pod \"redhat-operators-vw6k8\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.173467 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlxd2\" (UniqueName: \"kubernetes.io/projected/6275695a-0b4a-4e12-affd-bfabdffcf529-kube-api-access-vlxd2\") pod \"redhat-operators-vw6k8\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.177672 4841 patch_prober.go:28] interesting pod/downloads-7954f5f757-h8dcv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.177729 4841 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-h8dcv" podUID="4be459f8-8d18-441f-ae50-b56bdcf9367a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.183450 4841 patch_prober.go:28] interesting pod/downloads-7954f5f757-h8dcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.183506 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h8dcv" podUID="4be459f8-8d18-441f-ae50-b56bdcf9367a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.222023 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.248989 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.249047 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.251188 4841 patch_prober.go:28] interesting pod/console-f9d7485db-prd7v container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.251257 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-prd7v" podUID="87a01003-7343-4fba-ada1-2be090ebc0dd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.293110 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lpz5g"] Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.294105 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.312178 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpz5g"] Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.449001 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtd2k\" (UniqueName: \"kubernetes.io/projected/6c654bd7-2e20-4c59-91d8-17f232e97d35-kube-api-access-dtd2k\") pod \"redhat-operators-lpz5g\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.449114 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-catalog-content\") pod \"redhat-operators-lpz5g\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.449151 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-utilities\") pod \"redhat-operators-lpz5g\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.486461 4841 generic.go:334] "Generic (PLEG): container finished" podID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerID="47e48946c64e82f284d8475a58a55574b11655e2b27a2d19c9049a931327e8f6" exitCode=0 Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.486521 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvfq2" 
event={"ID":"53c05603-129b-4fb9-b0b9-b976b8ca5a60","Type":"ContainerDied","Data":"47e48946c64e82f284d8475a58a55574b11655e2b27a2d19c9049a931327e8f6"} Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.486581 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvfq2" event={"ID":"53c05603-129b-4fb9-b0b9-b976b8ca5a60","Type":"ContainerStarted","Data":"5f2da1d05d2118bcd2a4d4adeebdee2358b9390db4d19ea0cdd49d498d12a9fa"} Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.550179 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtd2k\" (UniqueName: \"kubernetes.io/projected/6c654bd7-2e20-4c59-91d8-17f232e97d35-kube-api-access-dtd2k\") pod \"redhat-operators-lpz5g\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.550264 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-catalog-content\") pod \"redhat-operators-lpz5g\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.550294 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-utilities\") pod \"redhat-operators-lpz5g\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.550976 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-utilities\") pod \"redhat-operators-lpz5g\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " 
pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.551457 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-catalog-content\") pod \"redhat-operators-lpz5g\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.604603 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtd2k\" (UniqueName: \"kubernetes.io/projected/6c654bd7-2e20-4c59-91d8-17f232e97d35-kube-api-access-dtd2k\") pod \"redhat-operators-lpz5g\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.614881 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.711231 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vw6k8"] Dec 04 09:21:26 crc kubenswrapper[4841]: W1204 09:21:26.723644 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6275695a_0b4a_4e12_affd_bfabdffcf529.slice/crio-237395d05f8760e213c63f4f81d6ccf5aa73620f966713efd046faa466721047 WatchSource:0}: Error finding container 237395d05f8760e213c63f4f81d6ccf5aa73620f966713efd046faa466721047: Status 404 returned error can't find the container with id 237395d05f8760e213c63f4f81d6ccf5aa73620f966713efd046faa466721047 Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.814562 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.851400 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpz5g"] Dec 04 09:21:26 crc kubenswrapper[4841]: W1204 09:21:26.859544 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c654bd7_2e20_4c59_91d8_17f232e97d35.slice/crio-fc8b494ec4f20ef815f87f80f6ceb4be3f974d80d5071de5bae8537425173b0a WatchSource:0}: Error finding container fc8b494ec4f20ef815f87f80f6ceb4be3f974d80d5071de5bae8537425173b0a: Status 404 returned error can't find the container with id fc8b494ec4f20ef815f87f80f6ceb4be3f974d80d5071de5bae8537425173b0a Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.905386 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.905438 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.917098 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.956437 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kube-api-access\") pod \"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d\" (UID: \"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d\") " Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.956744 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kubelet-dir\") pod 
\"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d\" (UID: \"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d\") " Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.957059 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d" (UID: "4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:21:26 crc kubenswrapper[4841]: I1204 09:21:26.970298 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d" (UID: "4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.053972 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.057250 4841 patch_prober.go:28] interesting pod/router-default-5444994796-xjqtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 09:21:27 crc kubenswrapper[4841]: [-]has-synced failed: reason withheld Dec 04 09:21:27 crc kubenswrapper[4841]: [+]process-running ok Dec 04 09:21:27 crc kubenswrapper[4841]: healthz check failed Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.057288 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjqtj" podUID="8ac74ed8-7277-42a5-b155-96f047e1b662" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 
09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.057585 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.057601 4841 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.512543 4841 generic.go:334] "Generic (PLEG): container finished" podID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerID="cdea66786ae266df5b1b4399d96ff1a23ce67a21d615dc801ab973f872945dea" exitCode=0 Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.512611 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpz5g" event={"ID":"6c654bd7-2e20-4c59-91d8-17f232e97d35","Type":"ContainerDied","Data":"cdea66786ae266df5b1b4399d96ff1a23ce67a21d615dc801ab973f872945dea"} Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.512636 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpz5g" event={"ID":"6c654bd7-2e20-4c59-91d8-17f232e97d35","Type":"ContainerStarted","Data":"fc8b494ec4f20ef815f87f80f6ceb4be3f974d80d5071de5bae8537425173b0a"} Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.524300 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d","Type":"ContainerDied","Data":"cea794257650a15dd7a09066ef2916e3a0bb02cba1fd6e217942cb5bc83d7ce0"} Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.524354 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cea794257650a15dd7a09066ef2916e3a0bb02cba1fd6e217942cb5bc83d7ce0" Dec 04 09:21:27 crc 
kubenswrapper[4841]: I1204 09:21:27.524329 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.548926 4841 generic.go:334] "Generic (PLEG): container finished" podID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerID="4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122" exitCode=0 Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.549021 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw6k8" event={"ID":"6275695a-0b4a-4e12-affd-bfabdffcf529","Type":"ContainerDied","Data":"4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122"} Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.549062 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw6k8" event={"ID":"6275695a-0b4a-4e12-affd-bfabdffcf529","Type":"ContainerStarted","Data":"237395d05f8760e213c63f4f81d6ccf5aa73620f966713efd046faa466721047"} Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.554554 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xfscp" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.652110 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 04 09:21:27 crc kubenswrapper[4841]: E1204 09:21:27.652374 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d" containerName="pruner" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.652388 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d" containerName="pruner" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.652508 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9345c2-9e07-48f8-9d59-aa1b54dbaf3d" 
containerName="pruner" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.653026 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.654842 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.654979 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.655184 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.766086 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e7c152-fd88-4ce0-843e-e811b298516e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"54e7c152-fd88-4ce0-843e-e811b298516e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.766239 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e7c152-fd88-4ce0-843e-e811b298516e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"54e7c152-fd88-4ce0-843e-e811b298516e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.867230 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e7c152-fd88-4ce0-843e-e811b298516e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"54e7c152-fd88-4ce0-843e-e811b298516e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 09:21:27 crc 
kubenswrapper[4841]: I1204 09:21:27.867378 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e7c152-fd88-4ce0-843e-e811b298516e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"54e7c152-fd88-4ce0-843e-e811b298516e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.867473 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e7c152-fd88-4ce0-843e-e811b298516e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"54e7c152-fd88-4ce0-843e-e811b298516e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.899963 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e7c152-fd88-4ce0-843e-e811b298516e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"54e7c152-fd88-4ce0-843e-e811b298516e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 09:21:27 crc kubenswrapper[4841]: I1204 09:21:27.972857 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 09:21:28 crc kubenswrapper[4841]: I1204 09:21:28.058903 4841 patch_prober.go:28] interesting pod/router-default-5444994796-xjqtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 09:21:28 crc kubenswrapper[4841]: [-]has-synced failed: reason withheld Dec 04 09:21:28 crc kubenswrapper[4841]: [+]process-running ok Dec 04 09:21:28 crc kubenswrapper[4841]: healthz check failed Dec 04 09:21:28 crc kubenswrapper[4841]: I1204 09:21:28.059157 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjqtj" podUID="8ac74ed8-7277-42a5-b155-96f047e1b662" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 09:21:28 crc kubenswrapper[4841]: I1204 09:21:28.606722 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 04 09:21:28 crc kubenswrapper[4841]: W1204 09:21:28.642416 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod54e7c152_fd88_4ce0_843e_e811b298516e.slice/crio-43a5516f9c6b02a4a8a5f182b2218fa7e3b4b709527746cf04f5a16c7f1ec2d5 WatchSource:0}: Error finding container 43a5516f9c6b02a4a8a5f182b2218fa7e3b4b709527746cf04f5a16c7f1ec2d5: Status 404 returned error can't find the container with id 43a5516f9c6b02a4a8a5f182b2218fa7e3b4b709527746cf04f5a16c7f1ec2d5 Dec 04 09:21:29 crc kubenswrapper[4841]: I1204 09:21:29.057961 4841 patch_prober.go:28] interesting pod/router-default-5444994796-xjqtj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 09:21:29 crc kubenswrapper[4841]: [+]has-synced ok Dec 04 09:21:29 crc kubenswrapper[4841]: 
[+]process-running ok Dec 04 09:21:29 crc kubenswrapper[4841]: healthz check failed Dec 04 09:21:29 crc kubenswrapper[4841]: I1204 09:21:29.058021 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjqtj" podUID="8ac74ed8-7277-42a5-b155-96f047e1b662" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 09:21:29 crc kubenswrapper[4841]: I1204 09:21:29.562000 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"54e7c152-fd88-4ce0-843e-e811b298516e","Type":"ContainerStarted","Data":"43a5516f9c6b02a4a8a5f182b2218fa7e3b4b709527746cf04f5a16c7f1ec2d5"} Dec 04 09:21:29 crc kubenswrapper[4841]: I1204 09:21:29.628643 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:21:30 crc kubenswrapper[4841]: I1204 09:21:30.057033 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:30 crc kubenswrapper[4841]: I1204 09:21:30.060038 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xjqtj" Dec 04 09:21:30 crc kubenswrapper[4841]: I1204 09:21:30.569979 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"54e7c152-fd88-4ce0-843e-e811b298516e","Type":"ContainerStarted","Data":"f30285f52e3efee26e22b68b8ad056dad4c9acfc003d49e407af8625a3329ab3"} Dec 04 09:21:30 crc kubenswrapper[4841]: I1204 09:21:30.587779 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.5877472839999998 podStartE2EDuration="3.587747284s" podCreationTimestamp="2025-12-04 09:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:30.584422211 +0000 UTC m=+157.336212415" watchObservedRunningTime="2025-12-04 09:21:30.587747284 +0000 UTC m=+157.339537488" Dec 04 09:21:31 crc kubenswrapper[4841]: I1204 09:21:31.579589 4841 generic.go:334] "Generic (PLEG): container finished" podID="54e7c152-fd88-4ce0-843e-e811b298516e" containerID="f30285f52e3efee26e22b68b8ad056dad4c9acfc003d49e407af8625a3329ab3" exitCode=0 Dec 04 09:21:31 crc kubenswrapper[4841]: I1204 09:21:31.579688 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"54e7c152-fd88-4ce0-843e-e811b298516e","Type":"ContainerDied","Data":"f30285f52e3efee26e22b68b8ad056dad4c9acfc003d49e407af8625a3329ab3"} Dec 04 09:21:32 crc kubenswrapper[4841]: I1204 09:21:32.109677 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jv68h" Dec 04 09:21:33 crc kubenswrapper[4841]: I1204 09:21:33.759722 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:21:33 crc kubenswrapper[4841]: I1204 09:21:33.770382 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e74f87eb-fb70-4679-93f8-ebe5de564484-metrics-certs\") pod \"network-metrics-daemon-7t7hn\" (UID: \"e74f87eb-fb70-4679-93f8-ebe5de564484\") " pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:21:33 crc kubenswrapper[4841]: I1204 09:21:33.849841 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7t7hn" Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.259421 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.379044 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e7c152-fd88-4ce0-843e-e811b298516e-kube-api-access\") pod \"54e7c152-fd88-4ce0-843e-e811b298516e\" (UID: \"54e7c152-fd88-4ce0-843e-e811b298516e\") " Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.379087 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e7c152-fd88-4ce0-843e-e811b298516e-kubelet-dir\") pod \"54e7c152-fd88-4ce0-843e-e811b298516e\" (UID: \"54e7c152-fd88-4ce0-843e-e811b298516e\") " Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.379268 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54e7c152-fd88-4ce0-843e-e811b298516e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "54e7c152-fd88-4ce0-843e-e811b298516e" (UID: "54e7c152-fd88-4ce0-843e-e811b298516e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.379542 4841 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e7c152-fd88-4ce0-843e-e811b298516e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.385539 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e7c152-fd88-4ce0-843e-e811b298516e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "54e7c152-fd88-4ce0-843e-e811b298516e" (UID: "54e7c152-fd88-4ce0-843e-e811b298516e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.480606 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e7c152-fd88-4ce0-843e-e811b298516e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.607525 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7t7hn"] Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.607805 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"54e7c152-fd88-4ce0-843e-e811b298516e","Type":"ContainerDied","Data":"43a5516f9c6b02a4a8a5f182b2218fa7e3b4b709527746cf04f5a16c7f1ec2d5"} Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.607826 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a5516f9c6b02a4a8a5f182b2218fa7e3b4b709527746cf04f5a16c7f1ec2d5" Dec 04 09:21:35 crc kubenswrapper[4841]: I1204 09:21:35.607595 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 09:21:35 crc kubenswrapper[4841]: W1204 09:21:35.626237 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode74f87eb_fb70_4679_93f8_ebe5de564484.slice/crio-e64368688e1e7e77cfdc23c20f5a1a93cb20124662972c01deea9a861c80a0a1 WatchSource:0}: Error finding container e64368688e1e7e77cfdc23c20f5a1a93cb20124662972c01deea9a861c80a0a1: Status 404 returned error can't find the container with id e64368688e1e7e77cfdc23c20f5a1a93cb20124662972c01deea9a861c80a0a1 Dec 04 09:21:36 crc kubenswrapper[4841]: I1204 09:21:36.183888 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-h8dcv" Dec 04 09:21:36 crc kubenswrapper[4841]: I1204 09:21:36.387349 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:36 crc kubenswrapper[4841]: I1204 09:21:36.392523 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-prd7v" Dec 04 09:21:36 crc kubenswrapper[4841]: I1204 09:21:36.615372 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" event={"ID":"e74f87eb-fb70-4679-93f8-ebe5de564484","Type":"ContainerStarted","Data":"8b2ea70b0d2b7c25537fca768e5a5d11ac462fdef893cdcd991cf16158944deb"} Dec 04 09:21:36 crc kubenswrapper[4841]: I1204 09:21:36.615421 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" event={"ID":"e74f87eb-fb70-4679-93f8-ebe5de564484","Type":"ContainerStarted","Data":"e64368688e1e7e77cfdc23c20f5a1a93cb20124662972c01deea9a861c80a0a1"} Dec 04 09:21:44 crc kubenswrapper[4841]: I1204 09:21:44.038041 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:21:50 crc kubenswrapper[4841]: I1204 09:21:50.498210 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:21:50 crc kubenswrapper[4841]: I1204 09:21:50.498611 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:21:51 crc kubenswrapper[4841]: E1204 09:21:51.624753 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 04 09:21:51 crc kubenswrapper[4841]: E1204 09:21:51.624941 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbkmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-n8h7m_openshift-marketplace(24965104-a4c2-41bc-90af-19b331f214f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 09:21:51 crc kubenswrapper[4841]: E1204 09:21:51.626051 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-n8h7m" podUID="24965104-a4c2-41bc-90af-19b331f214f0" Dec 04 09:21:52 crc 
kubenswrapper[4841]: E1204 09:21:52.905730 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-n8h7m" podUID="24965104-a4c2-41bc-90af-19b331f214f0" Dec 04 09:21:53 crc kubenswrapper[4841]: E1204 09:21:53.912977 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 04 09:21:53 crc kubenswrapper[4841]: E1204 09:21:53.913310 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbgbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cvfq2_openshift-marketplace(53c05603-129b-4fb9-b0b9-b976b8ca5a60): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 09:21:53 crc kubenswrapper[4841]: E1204 09:21:53.914718 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cvfq2" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" Dec 04 09:21:55 crc 
kubenswrapper[4841]: E1204 09:21:55.301852 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cvfq2" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" Dec 04 09:21:55 crc kubenswrapper[4841]: E1204 09:21:55.389835 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 04 09:21:55 crc kubenswrapper[4841]: E1204 09:21:55.390017 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcjpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mlvxz_openshift-marketplace(9edf6830-72af-441c-b5ff-c9b65706dcc0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 09:21:55 crc kubenswrapper[4841]: E1204 09:21:55.391188 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mlvxz" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" Dec 04 09:21:57 crc 
kubenswrapper[4841]: I1204 09:21:57.089538 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7mdz" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.329110 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mlvxz" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.412949 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.413360 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tnsxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zxq8g_openshift-marketplace(ba167f5d-353d-4f7a-aba4-a8571b930170): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.414579 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zxq8g" podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" Dec 04 09:21:58 crc 
kubenswrapper[4841]: E1204 09:21:58.432043 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.432205 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtd2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-lpz5g_openshift-marketplace(6c654bd7-2e20-4c59-91d8-17f232e97d35): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.433696 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lpz5g" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.461911 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.462123 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grb29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4lmrm_openshift-marketplace(b148c13b-ac9d-4df8-9960-7a98df30bc57): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.463282 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4lmrm" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" Dec 04 09:21:58 crc 
kubenswrapper[4841]: E1204 09:21:58.491582 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.491723 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqgcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-kk6qx_openshift-marketplace(54eced9a-177d-46f7-b7f5-d388d401ada9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.492900 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kk6qx" podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" Dec 04 09:21:58 crc kubenswrapper[4841]: I1204 09:21:58.736053 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7t7hn" event={"ID":"e74f87eb-fb70-4679-93f8-ebe5de564484","Type":"ContainerStarted","Data":"57b37df9bcfee57775bfe7dc4741e8b657452bcdb1992396efe513d2194921b9"} Dec 04 09:21:58 crc kubenswrapper[4841]: I1204 09:21:58.737821 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw6k8" event={"ID":"6275695a-0b4a-4e12-affd-bfabdffcf529","Type":"ContainerStarted","Data":"253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2"} Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.741105 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kk6qx" podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.741155 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-4lmrm" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.741208 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zxq8g" podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" Dec 04 09:21:58 crc kubenswrapper[4841]: E1204 09:21:58.742353 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lpz5g" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" Dec 04 09:21:58 crc kubenswrapper[4841]: I1204 09:21:58.756961 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7t7hn" podStartSLOduration=166.756929126 podStartE2EDuration="2m46.756929126s" podCreationTimestamp="2025-12-04 09:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:21:58.755426638 +0000 UTC m=+185.507216912" watchObservedRunningTime="2025-12-04 09:21:58.756929126 +0000 UTC m=+185.508719370" Dec 04 09:21:59 crc kubenswrapper[4841]: I1204 09:21:59.744736 4841 generic.go:334] "Generic (PLEG): container finished" podID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerID="253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2" exitCode=0 Dec 04 09:21:59 crc kubenswrapper[4841]: I1204 09:21:59.744807 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw6k8" 
event={"ID":"6275695a-0b4a-4e12-affd-bfabdffcf529","Type":"ContainerDied","Data":"253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2"} Dec 04 09:21:59 crc kubenswrapper[4841]: I1204 09:21:59.787392 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 09:22:00 crc kubenswrapper[4841]: I1204 09:22:00.750818 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw6k8" event={"ID":"6275695a-0b4a-4e12-affd-bfabdffcf529","Type":"ContainerStarted","Data":"3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9"} Dec 04 09:22:00 crc kubenswrapper[4841]: I1204 09:22:00.774178 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vw6k8" podStartSLOduration=2.87709902 podStartE2EDuration="35.774159277s" podCreationTimestamp="2025-12-04 09:21:25 +0000 UTC" firstStartedPulling="2025-12-04 09:21:27.550659965 +0000 UTC m=+154.302450169" lastFinishedPulling="2025-12-04 09:22:00.447720222 +0000 UTC m=+187.199510426" observedRunningTime="2025-12-04 09:22:00.772655109 +0000 UTC m=+187.524445313" watchObservedRunningTime="2025-12-04 09:22:00.774159277 +0000 UTC m=+187.525949501" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.418558 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 04 09:22:03 crc kubenswrapper[4841]: E1204 09:22:03.418992 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e7c152-fd88-4ce0-843e-e811b298516e" containerName="pruner" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.419003 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e7c152-fd88-4ce0-843e-e811b298516e" containerName="pruner" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.419098 4841 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="54e7c152-fd88-4ce0-843e-e811b298516e" containerName="pruner" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.419471 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.422189 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.422517 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.438754 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.595082 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59a09c6e-132a-40b4-a53c-dc1c337de31a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"59a09c6e-132a-40b4-a53c-dc1c337de31a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.595227 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59a09c6e-132a-40b4-a53c-dc1c337de31a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"59a09c6e-132a-40b4-a53c-dc1c337de31a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.696596 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59a09c6e-132a-40b4-a53c-dc1c337de31a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"59a09c6e-132a-40b4-a53c-dc1c337de31a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" 
Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.696671 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59a09c6e-132a-40b4-a53c-dc1c337de31a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"59a09c6e-132a-40b4-a53c-dc1c337de31a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.696733 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59a09c6e-132a-40b4-a53c-dc1c337de31a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"59a09c6e-132a-40b4-a53c-dc1c337de31a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.718446 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59a09c6e-132a-40b4-a53c-dc1c337de31a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"59a09c6e-132a-40b4-a53c-dc1c337de31a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.746024 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 09:22:03 crc kubenswrapper[4841]: I1204 09:22:03.945781 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 04 09:22:04 crc kubenswrapper[4841]: I1204 09:22:04.772115 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"59a09c6e-132a-40b4-a53c-dc1c337de31a","Type":"ContainerStarted","Data":"0a1f4ed7bec186b8a59f7387006221434fff25b816e8c9d350b102a96da9d62d"} Dec 04 09:22:04 crc kubenswrapper[4841]: I1204 09:22:04.772379 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"59a09c6e-132a-40b4-a53c-dc1c337de31a","Type":"ContainerStarted","Data":"21a224fad0f7b595aad1878921b73f97e690de192ebf154e74ebb03a70630d13"} Dec 04 09:22:05 crc kubenswrapper[4841]: I1204 09:22:05.778681 4841 generic.go:334] "Generic (PLEG): container finished" podID="59a09c6e-132a-40b4-a53c-dc1c337de31a" containerID="0a1f4ed7bec186b8a59f7387006221434fff25b816e8c9d350b102a96da9d62d" exitCode=0 Dec 04 09:22:05 crc kubenswrapper[4841]: I1204 09:22:05.778772 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"59a09c6e-132a-40b4-a53c-dc1c337de31a","Type":"ContainerDied","Data":"0a1f4ed7bec186b8a59f7387006221434fff25b816e8c9d350b102a96da9d62d"} Dec 04 09:22:06 crc kubenswrapper[4841]: I1204 09:22:06.223278 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:22:06 crc kubenswrapper[4841]: I1204 09:22:06.223339 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:22:06 crc kubenswrapper[4841]: I1204 09:22:06.337894 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:22:06 crc kubenswrapper[4841]: I1204 09:22:06.826658 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:22:06 crc kubenswrapper[4841]: I1204 09:22:06.993667 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 09:22:07 crc kubenswrapper[4841]: I1204 09:22:07.137384 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59a09c6e-132a-40b4-a53c-dc1c337de31a-kube-api-access\") pod \"59a09c6e-132a-40b4-a53c-dc1c337de31a\" (UID: \"59a09c6e-132a-40b4-a53c-dc1c337de31a\") " Dec 04 09:22:07 crc kubenswrapper[4841]: I1204 09:22:07.137544 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59a09c6e-132a-40b4-a53c-dc1c337de31a-kubelet-dir\") pod \"59a09c6e-132a-40b4-a53c-dc1c337de31a\" (UID: \"59a09c6e-132a-40b4-a53c-dc1c337de31a\") " Dec 04 09:22:07 crc kubenswrapper[4841]: I1204 09:22:07.137847 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a09c6e-132a-40b4-a53c-dc1c337de31a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "59a09c6e-132a-40b4-a53c-dc1c337de31a" (UID: "59a09c6e-132a-40b4-a53c-dc1c337de31a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:22:07 crc kubenswrapper[4841]: I1204 09:22:07.142911 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a09c6e-132a-40b4-a53c-dc1c337de31a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "59a09c6e-132a-40b4-a53c-dc1c337de31a" (UID: "59a09c6e-132a-40b4-a53c-dc1c337de31a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:22:07 crc kubenswrapper[4841]: I1204 09:22:07.239323 4841 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59a09c6e-132a-40b4-a53c-dc1c337de31a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:07 crc kubenswrapper[4841]: I1204 09:22:07.239358 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59a09c6e-132a-40b4-a53c-dc1c337de31a-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:07 crc kubenswrapper[4841]: I1204 09:22:07.789112 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"59a09c6e-132a-40b4-a53c-dc1c337de31a","Type":"ContainerDied","Data":"21a224fad0f7b595aad1878921b73f97e690de192ebf154e74ebb03a70630d13"} Dec 04 09:22:07 crc kubenswrapper[4841]: I1204 09:22:07.789156 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21a224fad0f7b595aad1878921b73f97e690de192ebf154e74ebb03a70630d13" Dec 04 09:22:07 crc kubenswrapper[4841]: I1204 09:22:07.789136 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 09:22:07 crc kubenswrapper[4841]: I1204 09:22:07.790590 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8h7m" event={"ID":"24965104-a4c2-41bc-90af-19b331f214f0","Type":"ContainerStarted","Data":"f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867"} Dec 04 09:22:08 crc kubenswrapper[4841]: I1204 09:22:08.797113 4841 generic.go:334] "Generic (PLEG): container finished" podID="24965104-a4c2-41bc-90af-19b331f214f0" containerID="f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867" exitCode=0 Dec 04 09:22:08 crc kubenswrapper[4841]: I1204 09:22:08.797196 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8h7m" event={"ID":"24965104-a4c2-41bc-90af-19b331f214f0","Type":"ContainerDied","Data":"f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867"} Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.632374 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 04 09:22:09 crc kubenswrapper[4841]: E1204 09:22:09.632638 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a09c6e-132a-40b4-a53c-dc1c337de31a" containerName="pruner" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.632653 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a09c6e-132a-40b4-a53c-dc1c337de31a" containerName="pruner" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.632822 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a09c6e-132a-40b4-a53c-dc1c337de31a" containerName="pruner" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.633276 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.633362 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.645128 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.645425 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.770549 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.770601 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-var-lock\") pod \"installer-9-crc\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.770671 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kube-api-access\") pod \"installer-9-crc\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.872094 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kube-api-access\") pod \"installer-9-crc\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 
04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.872188 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.872205 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-var-lock\") pod \"installer-9-crc\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.872277 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-var-lock\") pod \"installer-9-crc\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.872280 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.897580 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kube-api-access\") pod \"installer-9-crc\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:09 crc kubenswrapper[4841]: I1204 09:22:09.952815 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:10 crc kubenswrapper[4841]: I1204 09:22:10.150017 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 04 09:22:10 crc kubenswrapper[4841]: I1204 09:22:10.809118 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7f8de332-17f0-4c3e-b1d6-419567acdb7d","Type":"ContainerStarted","Data":"05c0fe5d96c02b129d118bb327604f9b5c66e645c12e003ec074a844ecb719a7"} Dec 04 09:22:11 crc kubenswrapper[4841]: I1204 09:22:11.815314 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7f8de332-17f0-4c3e-b1d6-419567acdb7d","Type":"ContainerStarted","Data":"5edc26cb7a7481ec0df0a7ed8702f8b174b62338ce6b859ecb276288707e1e3f"} Dec 04 09:22:11 crc kubenswrapper[4841]: I1204 09:22:11.832528 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.832508632 podStartE2EDuration="2.832508632s" podCreationTimestamp="2025-12-04 09:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:22:11.830420118 +0000 UTC m=+198.582210322" watchObservedRunningTime="2025-12-04 09:22:11.832508632 +0000 UTC m=+198.584298836" Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.831790 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8h7m" event={"ID":"24965104-a4c2-41bc-90af-19b331f214f0","Type":"ContainerStarted","Data":"74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a"} Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.835899 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk6qx" 
event={"ID":"54eced9a-177d-46f7-b7f5-d388d401ada9","Type":"ContainerStarted","Data":"7decee1316b94023f29c7383560d0b609ab465972eb2836a29ef7a55f1d4e867"} Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.838667 4841 generic.go:334] "Generic (PLEG): container finished" podID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerID="e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2" exitCode=0 Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.838898 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxq8g" event={"ID":"ba167f5d-353d-4f7a-aba4-a8571b930170","Type":"ContainerDied","Data":"e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2"} Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.853335 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lmrm" event={"ID":"b148c13b-ac9d-4df8-9960-7a98df30bc57","Type":"ContainerStarted","Data":"9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384"} Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.854134 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n8h7m" podStartSLOduration=3.50616299 podStartE2EDuration="50.854113789s" podCreationTimestamp="2025-12-04 09:21:22 +0000 UTC" firstStartedPulling="2025-12-04 09:21:24.385220855 +0000 UTC m=+151.137011079" lastFinishedPulling="2025-12-04 09:22:11.733171664 +0000 UTC m=+198.484961878" observedRunningTime="2025-12-04 09:22:12.851018827 +0000 UTC m=+199.602809031" watchObservedRunningTime="2025-12-04 09:22:12.854113789 +0000 UTC m=+199.605903993" Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.856275 4841 generic.go:334] "Generic (PLEG): container finished" podID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerID="f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf" exitCode=0 Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 
09:22:12.856336 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlvxz" event={"ID":"9edf6830-72af-441c-b5ff-c9b65706dcc0","Type":"ContainerDied","Data":"f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf"} Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.856945 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.856997 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.863486 4841 generic.go:334] "Generic (PLEG): container finished" podID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerID="cb548df9d46edbe926be5b63cac648d3b5bda2459b25855a334998ae12bf7b16" exitCode=0 Dec 04 09:22:12 crc kubenswrapper[4841]: I1204 09:22:12.863579 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvfq2" event={"ID":"53c05603-129b-4fb9-b0b9-b976b8ca5a60","Type":"ContainerDied","Data":"cb548df9d46edbe926be5b63cac648d3b5bda2459b25855a334998ae12bf7b16"} Dec 04 09:22:13 crc kubenswrapper[4841]: I1204 09:22:13.870388 4841 generic.go:334] "Generic (PLEG): container finished" podID="54eced9a-177d-46f7-b7f5-d388d401ada9" containerID="7decee1316b94023f29c7383560d0b609ab465972eb2836a29ef7a55f1d4e867" exitCode=0 Dec 04 09:22:13 crc kubenswrapper[4841]: I1204 09:22:13.870461 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk6qx" event={"ID":"54eced9a-177d-46f7-b7f5-d388d401ada9","Type":"ContainerDied","Data":"7decee1316b94023f29c7383560d0b609ab465972eb2836a29ef7a55f1d4e867"} Dec 04 09:22:13 crc kubenswrapper[4841]: I1204 09:22:13.872774 4841 generic.go:334] "Generic (PLEG): container finished" podID="b148c13b-ac9d-4df8-9960-7a98df30bc57" 
containerID="9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384" exitCode=0 Dec 04 09:22:13 crc kubenswrapper[4841]: I1204 09:22:13.872820 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lmrm" event={"ID":"b148c13b-ac9d-4df8-9960-7a98df30bc57","Type":"ContainerDied","Data":"9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384"} Dec 04 09:22:13 crc kubenswrapper[4841]: I1204 09:22:13.909277 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n8h7m" podUID="24965104-a4c2-41bc-90af-19b331f214f0" containerName="registry-server" probeResult="failure" output=< Dec 04 09:22:13 crc kubenswrapper[4841]: timeout: failed to connect service ":50051" within 1s Dec 04 09:22:13 crc kubenswrapper[4841]: > Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.882730 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlvxz" event={"ID":"9edf6830-72af-441c-b5ff-c9b65706dcc0","Type":"ContainerStarted","Data":"49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8"} Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.886287 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvfq2" event={"ID":"53c05603-129b-4fb9-b0b9-b976b8ca5a60","Type":"ContainerStarted","Data":"a237863b27edea9bbfa1bff9b83f419606c0718a3da755a21ae4659b8710ae4d"} Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.888650 4841 generic.go:334] "Generic (PLEG): container finished" podID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerID="4883bb151ef4211b82720859995372f972ce00644ea2605ba325bd3b9758dc3f" exitCode=0 Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.888714 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpz5g" 
event={"ID":"6c654bd7-2e20-4c59-91d8-17f232e97d35","Type":"ContainerDied","Data":"4883bb151ef4211b82720859995372f972ce00644ea2605ba325bd3b9758dc3f"} Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.892132 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk6qx" event={"ID":"54eced9a-177d-46f7-b7f5-d388d401ada9","Type":"ContainerStarted","Data":"d86a52dadf89ca771319f568a6ae8c7613ca0ee04a20b96d6c8c1318daf0bb79"} Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.894964 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxq8g" event={"ID":"ba167f5d-353d-4f7a-aba4-a8571b930170","Type":"ContainerStarted","Data":"4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad"} Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.898184 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lmrm" event={"ID":"b148c13b-ac9d-4df8-9960-7a98df30bc57","Type":"ContainerStarted","Data":"81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8"} Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.903097 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mlvxz" podStartSLOduration=3.163316557 podStartE2EDuration="52.903079621s" podCreationTimestamp="2025-12-04 09:21:22 +0000 UTC" firstStartedPulling="2025-12-04 09:21:24.409385105 +0000 UTC m=+151.161175319" lastFinishedPulling="2025-12-04 09:22:14.149148179 +0000 UTC m=+200.900938383" observedRunningTime="2025-12-04 09:22:14.901017428 +0000 UTC m=+201.652807652" watchObservedRunningTime="2025-12-04 09:22:14.903079621 +0000 UTC m=+201.654869825" Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.942356 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cvfq2" podStartSLOduration=3.10502828 
podStartE2EDuration="50.942338464s" podCreationTimestamp="2025-12-04 09:21:24 +0000 UTC" firstStartedPulling="2025-12-04 09:21:26.489419931 +0000 UTC m=+153.241210125" lastFinishedPulling="2025-12-04 09:22:14.326730105 +0000 UTC m=+201.078520309" observedRunningTime="2025-12-04 09:22:14.937734814 +0000 UTC m=+201.689525018" watchObservedRunningTime="2025-12-04 09:22:14.942338464 +0000 UTC m=+201.694128668" Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.965878 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zxq8g" podStartSLOduration=3.330463977 podStartE2EDuration="52.965852837s" podCreationTimestamp="2025-12-04 09:21:22 +0000 UTC" firstStartedPulling="2025-12-04 09:21:24.390723553 +0000 UTC m=+151.142513757" lastFinishedPulling="2025-12-04 09:22:14.026112413 +0000 UTC m=+200.777902617" observedRunningTime="2025-12-04 09:22:14.964309396 +0000 UTC m=+201.716099610" watchObservedRunningTime="2025-12-04 09:22:14.965852837 +0000 UTC m=+201.717643051" Dec 04 09:22:14 crc kubenswrapper[4841]: I1204 09:22:14.986077 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kk6qx" podStartSLOduration=1.784941895 podStartE2EDuration="51.986055432s" podCreationTimestamp="2025-12-04 09:21:23 +0000 UTC" firstStartedPulling="2025-12-04 09:21:24.387564083 +0000 UTC m=+151.139354287" lastFinishedPulling="2025-12-04 09:22:14.58867763 +0000 UTC m=+201.340467824" observedRunningTime="2025-12-04 09:22:14.983164747 +0000 UTC m=+201.734954971" watchObservedRunningTime="2025-12-04 09:22:14.986055432 +0000 UTC m=+201.737845636" Dec 04 09:22:15 crc kubenswrapper[4841]: I1204 09:22:15.004002 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4lmrm" podStartSLOduration=2.08466108 podStartE2EDuration="51.00398337s" podCreationTimestamp="2025-12-04 09:21:24 +0000 UTC" 
firstStartedPulling="2025-12-04 09:21:25.490997513 +0000 UTC m=+152.242787717" lastFinishedPulling="2025-12-04 09:22:14.410319803 +0000 UTC m=+201.162110007" observedRunningTime="2025-12-04 09:22:15.001437943 +0000 UTC m=+201.753228177" watchObservedRunningTime="2025-12-04 09:22:15.00398337 +0000 UTC m=+201.755773574" Dec 04 09:22:15 crc kubenswrapper[4841]: I1204 09:22:15.250797 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:22:15 crc kubenswrapper[4841]: I1204 09:22:15.250989 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:22:15 crc kubenswrapper[4841]: I1204 09:22:15.906472 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpz5g" event={"ID":"6c654bd7-2e20-4c59-91d8-17f232e97d35","Type":"ContainerStarted","Data":"9c8673db818aea09f81bd5fae90e3265b27d97f183eb5cfee774358c7c00677e"} Dec 04 09:22:15 crc kubenswrapper[4841]: I1204 09:22:15.927134 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lpz5g" podStartSLOduration=2.080143099 podStartE2EDuration="49.92711423s" podCreationTimestamp="2025-12-04 09:21:26 +0000 UTC" firstStartedPulling="2025-12-04 09:21:27.513781664 +0000 UTC m=+154.265571858" lastFinishedPulling="2025-12-04 09:22:15.360752785 +0000 UTC m=+202.112542989" observedRunningTime="2025-12-04 09:22:15.926498975 +0000 UTC m=+202.678289179" watchObservedRunningTime="2025-12-04 09:22:15.92711423 +0000 UTC m=+202.678904434" Dec 04 09:22:16 crc kubenswrapper[4841]: I1204 09:22:16.289292 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cvfq2" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerName="registry-server" probeResult="failure" output=< Dec 04 09:22:16 crc kubenswrapper[4841]: timeout: failed to connect 
service ":50051" within 1s Dec 04 09:22:16 crc kubenswrapper[4841]: > Dec 04 09:22:16 crc kubenswrapper[4841]: I1204 09:22:16.615314 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:22:16 crc kubenswrapper[4841]: I1204 09:22:16.615365 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:22:17 crc kubenswrapper[4841]: I1204 09:22:17.656450 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpz5g" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerName="registry-server" probeResult="failure" output=< Dec 04 09:22:17 crc kubenswrapper[4841]: timeout: failed to connect service ":50051" within 1s Dec 04 09:22:17 crc kubenswrapper[4841]: > Dec 04 09:22:20 crc kubenswrapper[4841]: I1204 09:22:20.497326 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:22:20 crc kubenswrapper[4841]: I1204 09:22:20.497705 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:22:20 crc kubenswrapper[4841]: I1204 09:22:20.497787 4841 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:22:20 crc kubenswrapper[4841]: I1204 09:22:20.498594 4841 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e"} pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 09:22:20 crc kubenswrapper[4841]: I1204 09:22:20.498697 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" containerID="cri-o://e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e" gracePeriod=600 Dec 04 09:22:20 crc kubenswrapper[4841]: I1204 09:22:20.936645 4841 generic.go:334] "Generic (PLEG): container finished" podID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerID="e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e" exitCode=0 Dec 04 09:22:20 crc kubenswrapper[4841]: I1204 09:22:20.936748 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerDied","Data":"e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e"} Dec 04 09:22:21 crc kubenswrapper[4841]: I1204 09:22:21.945272 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerStarted","Data":"5b94bd0cc51675d9a003d35873a29c70ee931aa33d6acdd0ac23ceb1766effc2"} Dec 04 09:22:22 crc kubenswrapper[4841]: I1204 09:22:22.930810 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:22:22 crc kubenswrapper[4841]: I1204 09:22:22.968868 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:22:23 crc 
kubenswrapper[4841]: I1204 09:22:23.023713 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:22:23 crc kubenswrapper[4841]: I1204 09:22:23.023805 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:22:23 crc kubenswrapper[4841]: I1204 09:22:23.060265 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:22:23 crc kubenswrapper[4841]: I1204 09:22:23.238868 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:22:23 crc kubenswrapper[4841]: I1204 09:22:23.238958 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:22:23 crc kubenswrapper[4841]: I1204 09:22:23.291447 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:22:23 crc kubenswrapper[4841]: I1204 09:22:23.422129 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:22:23 crc kubenswrapper[4841]: I1204 09:22:23.422302 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:22:23 crc kubenswrapper[4841]: I1204 09:22:23.482102 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:22:24 crc kubenswrapper[4841]: I1204 09:22:24.004125 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:22:24 crc kubenswrapper[4841]: I1204 09:22:24.007929 4841 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:22:24 crc kubenswrapper[4841]: I1204 09:22:24.011304 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:22:24 crc kubenswrapper[4841]: I1204 09:22:24.866625 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:22:24 crc kubenswrapper[4841]: I1204 09:22:24.866789 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:22:24 crc kubenswrapper[4841]: I1204 09:22:24.930078 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:22:25 crc kubenswrapper[4841]: I1204 09:22:25.027099 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:22:25 crc kubenswrapper[4841]: I1204 09:22:25.292043 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:22:25 crc kubenswrapper[4841]: I1204 09:22:25.351009 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:22:25 crc kubenswrapper[4841]: I1204 09:22:25.372364 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kk6qx"] Dec 04 09:22:25 crc kubenswrapper[4841]: I1204 09:22:25.964033 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxq8g"] Dec 04 09:22:25 crc kubenswrapper[4841]: I1204 09:22:25.972307 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kk6qx" podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" 
containerName="registry-server" containerID="cri-o://d86a52dadf89ca771319f568a6ae8c7613ca0ee04a20b96d6c8c1318daf0bb79" gracePeriod=2 Dec 04 09:22:25 crc kubenswrapper[4841]: I1204 09:22:25.972442 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zxq8g" podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerName="registry-server" containerID="cri-o://4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad" gracePeriod=2 Dec 04 09:22:26 crc kubenswrapper[4841]: I1204 09:22:26.678964 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:22:26 crc kubenswrapper[4841]: I1204 09:22:26.949692 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.765055 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvfq2"] Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.765601 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cvfq2" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerName="registry-server" containerID="cri-o://a237863b27edea9bbfa1bff9b83f419606c0718a3da755a21ae4659b8710ae4d" gracePeriod=2 Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.921517 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.986280 4841 generic.go:334] "Generic (PLEG): container finished" podID="54eced9a-177d-46f7-b7f5-d388d401ada9" containerID="d86a52dadf89ca771319f568a6ae8c7613ca0ee04a20b96d6c8c1318daf0bb79" exitCode=0 Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.986346 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk6qx" event={"ID":"54eced9a-177d-46f7-b7f5-d388d401ada9","Type":"ContainerDied","Data":"d86a52dadf89ca771319f568a6ae8c7613ca0ee04a20b96d6c8c1318daf0bb79"} Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.988738 4841 generic.go:334] "Generic (PLEG): container finished" podID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerID="4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad" exitCode=0 Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.988793 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxq8g" event={"ID":"ba167f5d-353d-4f7a-aba4-a8571b930170","Type":"ContainerDied","Data":"4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad"} Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.988817 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxq8g" Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.988839 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxq8g" event={"ID":"ba167f5d-353d-4f7a-aba4-a8571b930170","Type":"ContainerDied","Data":"c87144e561391870a08f11429fd5210b361a5e95b32777a858ef504ad3297ff2"} Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.988859 4841 scope.go:117] "RemoveContainer" containerID="4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad" Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.992493 4841 generic.go:334] "Generic (PLEG): container finished" podID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerID="a237863b27edea9bbfa1bff9b83f419606c0718a3da755a21ae4659b8710ae4d" exitCode=0 Dec 04 09:22:27 crc kubenswrapper[4841]: I1204 09:22:27.992523 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvfq2" event={"ID":"53c05603-129b-4fb9-b0b9-b976b8ca5a60","Type":"ContainerDied","Data":"a237863b27edea9bbfa1bff9b83f419606c0718a3da755a21ae4659b8710ae4d"} Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.007629 4841 scope.go:117] "RemoveContainer" containerID="e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.013934 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-utilities\") pod \"ba167f5d-353d-4f7a-aba4-a8571b930170\" (UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.014001 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnsxl\" (UniqueName: \"kubernetes.io/projected/ba167f5d-353d-4f7a-aba4-a8571b930170-kube-api-access-tnsxl\") pod \"ba167f5d-353d-4f7a-aba4-a8571b930170\" 
(UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.014096 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-catalog-content\") pod \"ba167f5d-353d-4f7a-aba4-a8571b930170\" (UID: \"ba167f5d-353d-4f7a-aba4-a8571b930170\") " Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.014754 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-utilities" (OuterVolumeSpecName: "utilities") pod "ba167f5d-353d-4f7a-aba4-a8571b930170" (UID: "ba167f5d-353d-4f7a-aba4-a8571b930170"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.023064 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba167f5d-353d-4f7a-aba4-a8571b930170-kube-api-access-tnsxl" (OuterVolumeSpecName: "kube-api-access-tnsxl") pod "ba167f5d-353d-4f7a-aba4-a8571b930170" (UID: "ba167f5d-353d-4f7a-aba4-a8571b930170"). InnerVolumeSpecName "kube-api-access-tnsxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.028536 4841 scope.go:117] "RemoveContainer" containerID="e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.075807 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba167f5d-353d-4f7a-aba4-a8571b930170" (UID: "ba167f5d-353d-4f7a-aba4-a8571b930170"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.083303 4841 scope.go:117] "RemoveContainer" containerID="4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad" Dec 04 09:22:28 crc kubenswrapper[4841]: E1204 09:22:28.083773 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad\": container with ID starting with 4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad not found: ID does not exist" containerID="4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.083819 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad"} err="failed to get container status \"4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad\": rpc error: code = NotFound desc = could not find container \"4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad\": container with ID starting with 4d9023bf7c81368ce8c1a2713c0471cea0f46a8192d2df78c20a56916ceee2ad not found: ID does not exist" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.083852 4841 scope.go:117] "RemoveContainer" containerID="e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2" Dec 04 09:22:28 crc kubenswrapper[4841]: E1204 09:22:28.084223 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2\": container with ID starting with e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2 not found: ID does not exist" containerID="e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.084260 
4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2"} err="failed to get container status \"e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2\": rpc error: code = NotFound desc = could not find container \"e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2\": container with ID starting with e9bf20ab95f58c20a3f9bb80822c7374400af8b97d9ddd7eec01bfb00372ebf2 not found: ID does not exist" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.084289 4841 scope.go:117] "RemoveContainer" containerID="e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe" Dec 04 09:22:28 crc kubenswrapper[4841]: E1204 09:22:28.084543 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe\": container with ID starting with e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe not found: ID does not exist" containerID="e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.084574 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe"} err="failed to get container status \"e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe\": rpc error: code = NotFound desc = could not find container \"e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe\": container with ID starting with e089ce15061040cbcee6d644d055da42e3d5b21929fcf09915dca50f4e5115fe not found: ID does not exist" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.116084 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnsxl\" (UniqueName: 
\"kubernetes.io/projected/ba167f5d-353d-4f7a-aba4-a8571b930170-kube-api-access-tnsxl\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.116142 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.116165 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba167f5d-353d-4f7a-aba4-a8571b930170-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.331472 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxq8g"] Dec 04 09:22:28 crc kubenswrapper[4841]: I1204 09:22:28.335882 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zxq8g"] Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.231825 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.238070 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.335657 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-utilities\") pod \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.335734 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqgcw\" (UniqueName: \"kubernetes.io/projected/54eced9a-177d-46f7-b7f5-d388d401ada9-kube-api-access-dqgcw\") pod \"54eced9a-177d-46f7-b7f5-d388d401ada9\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.335881 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-catalog-content\") pod \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.335941 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-catalog-content\") pod \"54eced9a-177d-46f7-b7f5-d388d401ada9\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.335983 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-utilities\") pod \"54eced9a-177d-46f7-b7f5-d388d401ada9\" (UID: \"54eced9a-177d-46f7-b7f5-d388d401ada9\") " Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.336058 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pbgbw\" (UniqueName: \"kubernetes.io/projected/53c05603-129b-4fb9-b0b9-b976b8ca5a60-kube-api-access-pbgbw\") pod \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\" (UID: \"53c05603-129b-4fb9-b0b9-b976b8ca5a60\") " Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.337501 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-utilities" (OuterVolumeSpecName: "utilities") pod "53c05603-129b-4fb9-b0b9-b976b8ca5a60" (UID: "53c05603-129b-4fb9-b0b9-b976b8ca5a60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.337574 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-utilities" (OuterVolumeSpecName: "utilities") pod "54eced9a-177d-46f7-b7f5-d388d401ada9" (UID: "54eced9a-177d-46f7-b7f5-d388d401ada9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.338022 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.338062 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.343296 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c05603-129b-4fb9-b0b9-b976b8ca5a60-kube-api-access-pbgbw" (OuterVolumeSpecName: "kube-api-access-pbgbw") pod "53c05603-129b-4fb9-b0b9-b976b8ca5a60" (UID: "53c05603-129b-4fb9-b0b9-b976b8ca5a60"). InnerVolumeSpecName "kube-api-access-pbgbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.343722 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54eced9a-177d-46f7-b7f5-d388d401ada9-kube-api-access-dqgcw" (OuterVolumeSpecName: "kube-api-access-dqgcw") pod "54eced9a-177d-46f7-b7f5-d388d401ada9" (UID: "54eced9a-177d-46f7-b7f5-d388d401ada9"). InnerVolumeSpecName "kube-api-access-dqgcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.375286 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53c05603-129b-4fb9-b0b9-b976b8ca5a60" (UID: "53c05603-129b-4fb9-b0b9-b976b8ca5a60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.395564 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54eced9a-177d-46f7-b7f5-d388d401ada9" (UID: "54eced9a-177d-46f7-b7f5-d388d401ada9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.439975 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqgcw\" (UniqueName: \"kubernetes.io/projected/54eced9a-177d-46f7-b7f5-d388d401ada9-kube-api-access-dqgcw\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.440031 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c05603-129b-4fb9-b0b9-b976b8ca5a60-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.440050 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54eced9a-177d-46f7-b7f5-d388d401ada9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.440068 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbgbw\" (UniqueName: \"kubernetes.io/projected/53c05603-129b-4fb9-b0b9-b976b8ca5a60-kube-api-access-pbgbw\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:29 crc kubenswrapper[4841]: I1204 09:22:29.627172 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" path="/var/lib/kubelet/pods/ba167f5d-353d-4f7a-aba4-a8571b930170/volumes" Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.007411 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk6qx" event={"ID":"54eced9a-177d-46f7-b7f5-d388d401ada9","Type":"ContainerDied","Data":"a2a1e32e45d9687ead79fede10573bb920087af149377601a12f8d3cc909a28a"} Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.007480 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kk6qx" Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.007487 4841 scope.go:117] "RemoveContainer" containerID="d86a52dadf89ca771319f568a6ae8c7613ca0ee04a20b96d6c8c1318daf0bb79" Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.012648 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvfq2" Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.012565 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvfq2" event={"ID":"53c05603-129b-4fb9-b0b9-b976b8ca5a60","Type":"ContainerDied","Data":"5f2da1d05d2118bcd2a4d4adeebdee2358b9390db4d19ea0cdd49d498d12a9fa"} Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.034812 4841 scope.go:117] "RemoveContainer" containerID="7decee1316b94023f29c7383560d0b609ab465972eb2836a29ef7a55f1d4e867" Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.049160 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kk6qx"] Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.089374 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kk6qx"] Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.094077 4841 scope.go:117] "RemoveContainer" containerID="ba486150b013e05e87baa97b0f220a73e61a83d7c3d57a1645a8af29ac8ffcfa" Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.097115 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvfq2"] Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.100660 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvfq2"] Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.111542 4841 scope.go:117] "RemoveContainer" 
containerID="a237863b27edea9bbfa1bff9b83f419606c0718a3da755a21ae4659b8710ae4d" Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.124040 4841 scope.go:117] "RemoveContainer" containerID="cb548df9d46edbe926be5b63cac648d3b5bda2459b25855a334998ae12bf7b16" Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.138552 4841 scope.go:117] "RemoveContainer" containerID="47e48946c64e82f284d8475a58a55574b11655e2b27a2d19c9049a931327e8f6" Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.765202 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpz5g"] Dec 04 09:22:30 crc kubenswrapper[4841]: I1204 09:22:30.765424 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lpz5g" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerName="registry-server" containerID="cri-o://9c8673db818aea09f81bd5fae90e3265b27d97f183eb5cfee774358c7c00677e" gracePeriod=2 Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.033412 4841 generic.go:334] "Generic (PLEG): container finished" podID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerID="9c8673db818aea09f81bd5fae90e3265b27d97f183eb5cfee774358c7c00677e" exitCode=0 Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.033918 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpz5g" event={"ID":"6c654bd7-2e20-4c59-91d8-17f232e97d35","Type":"ContainerDied","Data":"9c8673db818aea09f81bd5fae90e3265b27d97f183eb5cfee774358c7c00677e"} Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.222607 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.264645 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtd2k\" (UniqueName: \"kubernetes.io/projected/6c654bd7-2e20-4c59-91d8-17f232e97d35-kube-api-access-dtd2k\") pod \"6c654bd7-2e20-4c59-91d8-17f232e97d35\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.264723 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-utilities\") pod \"6c654bd7-2e20-4c59-91d8-17f232e97d35\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.264837 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-catalog-content\") pod \"6c654bd7-2e20-4c59-91d8-17f232e97d35\" (UID: \"6c654bd7-2e20-4c59-91d8-17f232e97d35\") " Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.268473 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c654bd7-2e20-4c59-91d8-17f232e97d35-kube-api-access-dtd2k" (OuterVolumeSpecName: "kube-api-access-dtd2k") pod "6c654bd7-2e20-4c59-91d8-17f232e97d35" (UID: "6c654bd7-2e20-4c59-91d8-17f232e97d35"). InnerVolumeSpecName "kube-api-access-dtd2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.277888 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-utilities" (OuterVolumeSpecName: "utilities") pod "6c654bd7-2e20-4c59-91d8-17f232e97d35" (UID: "6c654bd7-2e20-4c59-91d8-17f232e97d35"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.367001 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtd2k\" (UniqueName: \"kubernetes.io/projected/6c654bd7-2e20-4c59-91d8-17f232e97d35-kube-api-access-dtd2k\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.367058 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.387382 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c654bd7-2e20-4c59-91d8-17f232e97d35" (UID: "6c654bd7-2e20-4c59-91d8-17f232e97d35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.468019 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c654bd7-2e20-4c59-91d8-17f232e97d35-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.631422 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" path="/var/lib/kubelet/pods/53c05603-129b-4fb9-b0b9-b976b8ca5a60/volumes" Dec 04 09:22:31 crc kubenswrapper[4841]: I1204 09:22:31.632868 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" path="/var/lib/kubelet/pods/54eced9a-177d-46f7-b7f5-d388d401ada9/volumes" Dec 04 09:22:32 crc kubenswrapper[4841]: I1204 09:22:32.044066 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpz5g" 
event={"ID":"6c654bd7-2e20-4c59-91d8-17f232e97d35","Type":"ContainerDied","Data":"fc8b494ec4f20ef815f87f80f6ceb4be3f974d80d5071de5bae8537425173b0a"} Dec 04 09:22:32 crc kubenswrapper[4841]: I1204 09:22:32.044143 4841 scope.go:117] "RemoveContainer" containerID="9c8673db818aea09f81bd5fae90e3265b27d97f183eb5cfee774358c7c00677e" Dec 04 09:22:32 crc kubenswrapper[4841]: I1204 09:22:32.044159 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpz5g" Dec 04 09:22:32 crc kubenswrapper[4841]: I1204 09:22:32.072794 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpz5g"] Dec 04 09:22:32 crc kubenswrapper[4841]: I1204 09:22:32.078710 4841 scope.go:117] "RemoveContainer" containerID="4883bb151ef4211b82720859995372f972ce00644ea2605ba325bd3b9758dc3f" Dec 04 09:22:32 crc kubenswrapper[4841]: I1204 09:22:32.082340 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lpz5g"] Dec 04 09:22:32 crc kubenswrapper[4841]: I1204 09:22:32.103276 4841 scope.go:117] "RemoveContainer" containerID="cdea66786ae266df5b1b4399d96ff1a23ce67a21d615dc801ab973f872945dea" Dec 04 09:22:33 crc kubenswrapper[4841]: I1204 09:22:33.626409 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" path="/var/lib/kubelet/pods/6c654bd7-2e20-4c59-91d8-17f232e97d35/volumes" Dec 04 09:22:35 crc kubenswrapper[4841]: I1204 09:22:35.072879 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qmlf5"] Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.425414 4841 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.426872 4841 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerName="extract-utilities" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.426890 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerName="extract-utilities" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.426907 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.426913 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.426924 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.426933 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.426940 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" containerName="extract-utilities" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.426948 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" containerName="extract-utilities" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.426956 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerName="extract-content" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.426963 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerName="extract-content" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.426973 4841 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerName="extract-utilities" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.426979 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerName="extract-utilities" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.426987 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerName="extract-content" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.426993 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerName="extract-content" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.427001 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.427007 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.427016 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.427025 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.427033 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerName="extract-content" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.427040 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerName="extract-content" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.427046 4841 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" containerName="extract-content" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.427053 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" containerName="extract-content" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.427066 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerName="extract-utilities" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.427072 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerName="extract-utilities" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.427199 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="54eced9a-177d-46f7-b7f5-d388d401ada9" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.427208 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c654bd7-2e20-4c59-91d8-17f232e97d35" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.427218 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c05603-129b-4fb9-b0b9-b976b8ca5a60" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.427227 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba167f5d-353d-4f7a-aba4-a8571b930170" containerName="registry-server" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.428259 4841 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.428393 4841 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.428614 4841 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941" gracePeriod=15 Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.428675 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d" gracePeriod=15 Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.428748 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d" gracePeriod=15 Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.428885 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac" gracePeriod=15 Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.429009 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.428786 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2" gracePeriod=15 Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.429405 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.429451 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.429478 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.429494 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.429520 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.429539 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.429565 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 
09:22:48.429581 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.429600 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.429690 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.429718 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.429818 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.429850 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.429916 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.430402 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.430433 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.430449 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 04 09:22:48 crc 
kubenswrapper[4841]: I1204 09:22:48.430509 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.430524 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.430585 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.438039 4841 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.490966 4841 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.148:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.508685 4841 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.509664 4841 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.510345 4841 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.510826 4841 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.511325 4841 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.511399 4841 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.511960 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="200ms" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.587224 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.587689 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.587743 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.587921 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.588020 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.588094 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.588176 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.588275 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.689645 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.689806 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.690057 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.690138 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.690181 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.690207 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.690246 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.690259 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.690281 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.690327 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.690351 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.690639 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.691061 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.691091 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.691114 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.691138 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.713665 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="400ms" Dec 04 09:22:48 crc kubenswrapper[4841]: I1204 09:22:48.792209 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:48 crc kubenswrapper[4841]: E1204 09:22:48.828332 4841 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.148:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187df8c8745156fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 09:22:48.827737852 +0000 UTC m=+235.579528056,LastTimestamp:2025-12-04 09:22:48.827737852 +0000 UTC m=+235.579528056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 04 09:22:49 crc kubenswrapper[4841]: E1204 09:22:49.114678 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="800ms" Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.139826 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.142405 4841 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.144089 4841 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d" exitCode=0 Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.144133 4841 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d" exitCode=0 Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.144153 4841 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2" exitCode=0 Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.144168 4841 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac" exitCode=2 Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.144205 4841 scope.go:117] "RemoveContainer" containerID="7d81d5742694f00fc1a9dd4e1a19e103989893055c5900dd9b206ca0d5ef2e87" Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.148926 4841 generic.go:334] "Generic (PLEG): container finished" podID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" containerID="5edc26cb7a7481ec0df0a7ed8702f8b174b62338ce6b859ecb276288707e1e3f" exitCode=0 Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.149105 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7f8de332-17f0-4c3e-b1d6-419567acdb7d","Type":"ContainerDied","Data":"5edc26cb7a7481ec0df0a7ed8702f8b174b62338ce6b859ecb276288707e1e3f"} Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.150210 4841 status_manager.go:851] "Failed to get 
status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.152094 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577"} Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.152183 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8cfcc4ffc551d940acc7200afa58fa9741cdd6937bb90ba61a6da7bea6a93998"} Dec 04 09:22:49 crc kubenswrapper[4841]: I1204 09:22:49.153158 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:49 crc kubenswrapper[4841]: E1204 09:22:49.153459 4841 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.148:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 09:22:49 crc kubenswrapper[4841]: E1204 09:22:49.916736 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="1.6s" Dec 
04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.164297 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.513200 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.514240 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kube-api-access\") pod \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.514303 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-var-lock\") pod \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.514327 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kubelet-dir\") pod \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\" (UID: \"7f8de332-17f0-4c3e-b1d6-419567acdb7d\") " Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.514592 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f8de332-17f0-4c3e-b1d6-419567acdb7d" (UID: "7f8de332-17f0-4c3e-b1d6-419567acdb7d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.515353 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-var-lock" (OuterVolumeSpecName: "var-lock") pod "7f8de332-17f0-4c3e-b1d6-419567acdb7d" (UID: "7f8de332-17f0-4c3e-b1d6-419567acdb7d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.519675 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.542183 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f8de332-17f0-4c3e-b1d6-419567acdb7d" (UID: "7f8de332-17f0-4c3e-b1d6-419567acdb7d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.615941 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.615983 4841 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-var-lock\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:50 crc kubenswrapper[4841]: I1204 09:22:50.615999 4841 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f8de332-17f0-4c3e-b1d6-419567acdb7d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.175254 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.176190 4841 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941" exitCode=0 Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.178068 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7f8de332-17f0-4c3e-b1d6-419567acdb7d","Type":"ContainerDied","Data":"05c0fe5d96c02b129d118bb327604f9b5c66e645c12e003ec074a844ecb719a7"} Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.178131 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05c0fe5d96c02b129d118bb327604f9b5c66e645c12e003ec074a844ecb719a7" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.178193 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.205366 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.344141 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.345249 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.345977 4841 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.346339 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.426182 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 04 
09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.426356 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.426387 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.426725 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.426726 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.426719 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:22:51 crc kubenswrapper[4841]: E1204 09:22:51.517888 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="3.2s" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.527509 4841 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.527546 4841 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.527558 4841 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 04 09:22:51 crc kubenswrapper[4841]: I1204 09:22:51.627670 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.188093 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.188921 4841 scope.go:117] "RemoveContainer" containerID="b340b66624b7b78390a24fee0e8d9f89528d43b7bbc3354ab2353e53d621346d" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.189054 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.189696 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.190085 4841 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.192160 4841 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.192738 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.210103 4841 scope.go:117] "RemoveContainer" containerID="b7f5b861b3471e027732dc6392774a2c576ee61e2ee236eb35361eaeb677a42d" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.223882 4841 scope.go:117] "RemoveContainer" containerID="964f3a0d361db152bd7efa8978c4643b6ecddec68988c7c09c57fee49dfca6b2" Dec 04 09:22:52 crc 
kubenswrapper[4841]: I1204 09:22:52.238659 4841 scope.go:117] "RemoveContainer" containerID="628abf2984a0b3ab43ced8c51d35a69275fbaa231ec375541561b4a58f9906ac" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.254616 4841 scope.go:117] "RemoveContainer" containerID="d6f02cbbaecf2d436ce82a6c588e9e5d068b51d71072700cf7732903cb32a941" Dec 04 09:22:52 crc kubenswrapper[4841]: I1204 09:22:52.268710 4841 scope.go:117] "RemoveContainer" containerID="c64d6a55dd02412f9cdf5b4cb1c60cfd161b89f2b1f819a2d834fb007216d01b" Dec 04 09:22:53 crc kubenswrapper[4841]: I1204 09:22:53.619128 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:53 crc kubenswrapper[4841]: I1204 09:22:53.620603 4841 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:22:54 crc kubenswrapper[4841]: E1204 09:22:54.236234 4841 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.148:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187df8c8745156fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 09:22:48.827737852 +0000 UTC m=+235.579528056,LastTimestamp:2025-12-04 09:22:48.827737852 +0000 UTC m=+235.579528056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 04 09:22:54 crc kubenswrapper[4841]: E1204 09:22:54.719230 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="6.4s" Dec 04 09:23:00 crc kubenswrapper[4841]: I1204 09:23:00.094934 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" containerName="oauth-openshift" containerID="cri-o://49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f" gracePeriod=15 Dec 04 09:23:00 crc kubenswrapper[4841]: I1204 09:23:00.989789 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:23:00 crc kubenswrapper[4841]: I1204 09:23:00.990440 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:00 crc kubenswrapper[4841]: I1204 09:23:00.990820 4841 status_manager.go:851] "Failed to get status for pod" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmlf5\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073095 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-dir\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073170 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-trusted-ca-bundle\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073223 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-ocp-branding-template\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: 
\"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073230 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073257 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-error\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073355 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-login\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073421 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-idp-0-file-data\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073449 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-session\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: 
\"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073479 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-router-certs\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073504 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-provider-selection\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073532 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-service-ca\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073552 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-cliconfig\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073574 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-policies\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc 
kubenswrapper[4841]: I1204 09:23:01.073597 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-serving-cert\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.073639 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h4w7\" (UniqueName: \"kubernetes.io/projected/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-kube-api-access-8h4w7\") pod \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\" (UID: \"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe\") " Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.074064 4841 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.074280 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.074321 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.074794 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.076495 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.080517 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.080742 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.081123 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.081141 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-kube-api-access-8h4w7" (OuterVolumeSpecName: "kube-api-access-8h4w7") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "kube-api-access-8h4w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.082512 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.082738 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.083069 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.083675 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.086991 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" (UID: "ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:23:01 crc kubenswrapper[4841]: E1204 09:23:01.119972 4841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="7s" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175464 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175517 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175529 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175543 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175553 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175599 4841 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175610 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175624 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175637 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175648 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175663 4841 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175673 4841 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath 
\"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.175692 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h4w7\" (UniqueName: \"kubernetes.io/projected/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe-kube-api-access-8h4w7\") on node \"crc\" DevicePath \"\"" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.246306 4841 generic.go:334] "Generic (PLEG): container finished" podID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" containerID="49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f" exitCode=0 Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.246370 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" event={"ID":"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe","Type":"ContainerDied","Data":"49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f"} Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.246381 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.246416 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" event={"ID":"ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe","Type":"ContainerDied","Data":"2ba13c878470cb0999fd86965e3853eaf0f8c8535e698565eb7ffc4197807719"} Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.246444 4841 scope.go:117] "RemoveContainer" containerID="49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.247200 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:01 crc 
kubenswrapper[4841]: I1204 09:23:01.247694 4841 status_manager.go:851] "Failed to get status for pod" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmlf5\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.267414 4841 scope.go:117] "RemoveContainer" containerID="49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f" Dec 04 09:23:01 crc kubenswrapper[4841]: E1204 09:23:01.268048 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f\": container with ID starting with 49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f not found: ID does not exist" containerID="49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.268140 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f"} err="failed to get container status \"49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f\": rpc error: code = NotFound desc = could not find container \"49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f\": container with ID starting with 49041d352fcc92636c8af120f1e7c2555f7d83eee84b05fcedb0bb711694018f not found: ID does not exist" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.273338 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: 
connection refused" Dec 04 09:23:01 crc kubenswrapper[4841]: I1204 09:23:01.273991 4841 status_manager.go:851] "Failed to get status for pod" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmlf5\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:02 crc kubenswrapper[4841]: I1204 09:23:02.616034 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:23:02 crc kubenswrapper[4841]: I1204 09:23:02.617239 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:02 crc kubenswrapper[4841]: I1204 09:23:02.617740 4841 status_manager.go:851] "Failed to get status for pod" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmlf5\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:02 crc kubenswrapper[4841]: I1204 09:23:02.629795 4841 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:02 crc kubenswrapper[4841]: I1204 09:23:02.629840 4841 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:02 crc kubenswrapper[4841]: E1204 09:23:02.630427 4841 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:23:02 crc kubenswrapper[4841]: I1204 09:23:02.631207 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:23:02 crc kubenswrapper[4841]: W1204 09:23:02.651951 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-68c95f2fa26fb39db96f8afe590a467262f3702f6be12933ec967f6e2413b536 WatchSource:0}: Error finding container 68c95f2fa26fb39db96f8afe590a467262f3702f6be12933ec967f6e2413b536: Status 404 returned error can't find the container with id 68c95f2fa26fb39db96f8afe590a467262f3702f6be12933ec967f6e2413b536 Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.263837 4841 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="9af8b814ef9183214d6e56c47a4ec5876d23d6244cc7b55f1ef68d8b5aed99b3" exitCode=0 Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.263978 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9af8b814ef9183214d6e56c47a4ec5876d23d6244cc7b55f1ef68d8b5aed99b3"} Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.264323 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"68c95f2fa26fb39db96f8afe590a467262f3702f6be12933ec967f6e2413b536"} Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.264864 4841 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.264910 4841 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.265607 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:03 crc kubenswrapper[4841]: E1204 09:23:03.265699 4841 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.266088 4841 status_manager.go:851] "Failed to get status for pod" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmlf5\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.269025 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.269104 4841 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9" exitCode=1 Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.269147 4841 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9"} Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.269803 4841 scope.go:117] "RemoveContainer" containerID="ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.270011 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.270549 4841 status_manager.go:851] "Failed to get status for pod" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmlf5\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.271105 4841 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.625743 4841 status_manager.go:851] "Failed to get status for pod" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.148:6443: connect: connection refused" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.626693 4841 status_manager.go:851] "Failed to get status for pod" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" pod="openshift-authentication/oauth-openshift-558db77b4-qmlf5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmlf5\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.627284 4841 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:03 crc kubenswrapper[4841]: I1204 09:23:03.627817 4841 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Dec 04 09:23:04 crc kubenswrapper[4841]: I1204 09:23:04.278678 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 04 09:23:04 crc kubenswrapper[4841]: I1204 09:23:04.278782 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"697e290b529360924423399c95328e70bd69980a75744da90664213ffb62587e"} Dec 04 09:23:04 crc kubenswrapper[4841]: I1204 09:23:04.283224 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8df4d697978115b302774a7de68136ffa006ab1d27917b2b4ce09cb2c04b1718"} Dec 04 09:23:04 crc kubenswrapper[4841]: I1204 09:23:04.283264 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"546fbab4208d908a79498e3a0b66f534d111b7f5ba6c3e196a1ffa023cc881e1"} Dec 04 09:23:04 crc kubenswrapper[4841]: I1204 09:23:04.283281 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"45b40dd785a52d01e045d38d677074702585d3c7c0ca599e14b42f63b41c1e65"} Dec 04 09:23:04 crc kubenswrapper[4841]: I1204 09:23:04.463005 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:23:04 crc kubenswrapper[4841]: I1204 09:23:04.463197 4841 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 04 09:23:04 crc kubenswrapper[4841]: I1204 09:23:04.463252 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 04 09:23:05 crc kubenswrapper[4841]: I1204 09:23:05.291114 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"84436e9776d9a4f4b51d5e43919dbade385999d8d80a4eda91ab2345e416ebc9"} Dec 04 09:23:05 crc kubenswrapper[4841]: I1204 09:23:05.291546 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a5034bd17710741bcf182bfdddabf07337981f983216242b43aa749519add479"} Dec 04 09:23:05 crc kubenswrapper[4841]: I1204 09:23:05.291347 4841 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:05 crc kubenswrapper[4841]: I1204 09:23:05.291585 4841 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:07 crc kubenswrapper[4841]: I1204 09:23:07.631670 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:23:07 crc kubenswrapper[4841]: I1204 09:23:07.632117 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:23:07 crc kubenswrapper[4841]: I1204 09:23:07.637119 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:23:09 crc kubenswrapper[4841]: I1204 09:23:09.190511 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:23:10 crc kubenswrapper[4841]: I1204 09:23:10.298464 4841 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:23:10 crc kubenswrapper[4841]: I1204 09:23:10.319380 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" 
Dec 04 09:23:10 crc kubenswrapper[4841]: I1204 09:23:10.319543 4841 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:10 crc kubenswrapper[4841]: I1204 09:23:10.319584 4841 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:10 crc kubenswrapper[4841]: I1204 09:23:10.323664 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:23:10 crc kubenswrapper[4841]: I1204 09:23:10.325754 4841 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ecf8c529-ca7b-42a6-8f26-f1e4404ed3d4" Dec 04 09:23:11 crc kubenswrapper[4841]: I1204 09:23:11.323390 4841 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:11 crc kubenswrapper[4841]: I1204 09:23:11.323424 4841 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:12 crc kubenswrapper[4841]: I1204 09:23:12.329162 4841 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:12 crc kubenswrapper[4841]: I1204 09:23:12.329489 4841 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ccd5d301-250e-4a2f-96c6-58cb258cb360" Dec 04 09:23:13 crc kubenswrapper[4841]: I1204 09:23:13.633501 4841 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="ecf8c529-ca7b-42a6-8f26-f1e4404ed3d4" Dec 04 09:23:14 crc kubenswrapper[4841]: I1204 09:23:14.462720 4841 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 04 09:23:14 crc kubenswrapper[4841]: I1204 09:23:14.463170 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 04 09:23:20 crc kubenswrapper[4841]: I1204 09:23:20.012415 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 04 09:23:20 crc kubenswrapper[4841]: I1204 09:23:20.602639 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 04 09:23:20 crc kubenswrapper[4841]: I1204 09:23:20.665644 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 04 09:23:20 crc kubenswrapper[4841]: I1204 09:23:20.961593 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 04 09:23:21 crc kubenswrapper[4841]: I1204 09:23:21.018366 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 04 09:23:21 crc kubenswrapper[4841]: I1204 09:23:21.234941 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 04 09:23:21 crc kubenswrapper[4841]: I1204 09:23:21.470811 4841 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.143061 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.198613 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.340476 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.355320 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.534023 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.549569 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.620957 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.647292 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.658814 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.664875 4841 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.671197 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.721533 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.769791 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.852257 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.980720 4841 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 04 09:23:22 crc kubenswrapper[4841]: I1204 09:23:22.986916 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.021464 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.051885 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.229548 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.313103 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 04 
09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.337803 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.390672 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.471290 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.478354 4841 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.563655 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.595732 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.694290 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.758684 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.812989 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.899341 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.965272 4841 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Dec 04 09:23:23 crc kubenswrapper[4841]: I1204 09:23:23.980175 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.000276 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.025162 4841 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.030542 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.258197 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.287265 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.320859 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.363143 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.391870 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.463236 4841 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial 
tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.463296 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.463355 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.464107 4841 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"697e290b529360924423399c95328e70bd69980a75744da90664213ffb62587e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.464269 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://697e290b529360924423399c95328e70bd69980a75744da90664213ffb62587e" gracePeriod=30 Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.501556 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.504051 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.509184 4841 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.543296 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.598652 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.624703 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.649667 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.704076 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.911585 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 04 09:23:24 crc kubenswrapper[4841]: I1204 09:23:24.974452 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.017954 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.025023 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.153171 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 
09:23:25.195675 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.280504 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.522795 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.549423 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.574245 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.624899 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.633056 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.682499 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.845755 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 04 09:23:25 crc kubenswrapper[4841]: I1204 09:23:25.852993 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.012748 4841 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.014509 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.079906 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.129579 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.202088 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.346966 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.417236 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.497399 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.499710 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.518351 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.594846 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 04 09:23:26 crc 
kubenswrapper[4841]: I1204 09:23:26.641835 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.676554 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.691030 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.740326 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.745371 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.774536 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.813612 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.877208 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.895891 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 04 09:23:26 crc kubenswrapper[4841]: I1204 09:23:26.940066 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.032664 4841 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.043394 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.044103 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.044146 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.302905 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.406191 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.415874 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.463229 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.470188 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.561527 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.748889 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 
09:23:27.757984 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.948531 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.977094 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 04 09:23:27 crc kubenswrapper[4841]: I1204 09:23:27.994009 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.031121 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.166275 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.220817 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.234557 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.327444 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.418923 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.544554 4841 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.741749 4841 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.756840 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.837135 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.840906 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.852956 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.967055 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.986465 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 04 09:23:28 crc kubenswrapper[4841]: I1204 09:23:28.997455 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.036372 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.123861 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.146391 4841 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.185748 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.218163 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.304496 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.495409 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.579338 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.647743 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.679311 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.717593 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.859620 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.934008 4841 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 04 09:23:29 crc kubenswrapper[4841]: I1204 09:23:29.970113 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.003156 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.010397 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.021558 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.131584 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.196519 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.319730 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.337101 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.345870 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.350245 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 
04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.354131 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.495561 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.550657 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.579315 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.592250 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.592817 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.603731 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.681166 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.719018 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.735735 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.778409 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 04 09:23:30 
crc kubenswrapper[4841]: I1204 09:23:30.853921 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.899385 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 04 09:23:30 crc kubenswrapper[4841]: I1204 09:23:30.977077 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.014669 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.068657 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.118067 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.176232 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.304413 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.366484 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.370444 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.378912 4841 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-apiserver"/"serving-cert" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.413888 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.531684 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.544810 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.584859 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.689777 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.698343 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.723638 4841 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.728565 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-qmlf5"] Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.728628 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.733656 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.750004 4841 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.749987328 podStartE2EDuration="21.749987328s" podCreationTimestamp="2025-12-04 09:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:23:31.746631269 +0000 UTC m=+278.498421503" watchObservedRunningTime="2025-12-04 09:23:31.749987328 +0000 UTC m=+278.501777542" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.760065 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.840757 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.855656 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.896204 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.945381 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 04 09:23:31 crc kubenswrapper[4841]: I1204 09:23:31.960485 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.050889 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.092502 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.133721 4841 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.162341 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.243834 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.349731 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.395175 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.462868 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.552495 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.629815 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.647980 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.685154 4841 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.685432 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577" gracePeriod=5 Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.716504 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.744981 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.827217 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.851992 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.869608 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.877876 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.925674 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 04 09:23:32 crc kubenswrapper[4841]: I1204 09:23:32.982576 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.000568 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.142201 4841 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.192647 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.221538 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.254729 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.272185 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.295373 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.352962 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.362097 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.499273 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.620807 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.623908 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" path="/var/lib/kubelet/pods/ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe/volumes" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.661613 4841 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.700078 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.823941 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.861733 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.885806 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.887407 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 04 09:23:33 crc kubenswrapper[4841]: I1204 09:23:33.985581 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 04 09:23:34 crc kubenswrapper[4841]: I1204 09:23:34.013076 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 04 09:23:34 crc kubenswrapper[4841]: I1204 09:23:34.071769 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 04 09:23:34 crc kubenswrapper[4841]: I1204 09:23:34.112605 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 04 09:23:34 crc kubenswrapper[4841]: I1204 09:23:34.189496 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 04 09:23:34 crc 
kubenswrapper[4841]: I1204 09:23:34.305274 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 04 09:23:34 crc kubenswrapper[4841]: I1204 09:23:34.367986 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 04 09:23:34 crc kubenswrapper[4841]: I1204 09:23:34.565229 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 04 09:23:34 crc kubenswrapper[4841]: I1204 09:23:34.930231 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.059329 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.183664 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.314724 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.401709 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-f5cd6b797-25mpj"] Dec 04 09:23:35 crc kubenswrapper[4841]: E1204 09:23:35.401984 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" containerName="installer" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.402001 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" containerName="installer" Dec 04 09:23:35 crc kubenswrapper[4841]: E1204 09:23:35.402012 4841 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" containerName="oauth-openshift" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.402020 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" containerName="oauth-openshift" Dec 04 09:23:35 crc kubenswrapper[4841]: E1204 09:23:35.402040 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.402049 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.402157 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.402170 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8de332-17f0-4c3e-b1d6-419567acdb7d" containerName="installer" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.402187 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8e7a5f-d403-4417-b9c3-aac9c0bd43fe" containerName="oauth-openshift" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.402608 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.406656 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.406837 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.406971 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.408270 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.409017 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.409019 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.409123 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.409381 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.410583 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.411398 4841 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.411732 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.412664 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.418277 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.424357 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f5cd6b797-25mpj"] Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.432628 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.435338 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.439127 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.516995 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517049 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517095 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-audit-policies\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517140 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8n7v\" (UniqueName: \"kubernetes.io/projected/ab666d67-1ae7-4ed1-a513-86fb4364b602-kube-api-access-p8n7v\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517165 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517223 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab666d67-1ae7-4ed1-a513-86fb4364b602-audit-dir\") pod 
\"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517245 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517279 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-router-certs\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517303 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-template-login\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517329 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-template-error\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " 
pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517353 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-service-ca\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517452 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517499 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.517533 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-session\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.547704 4841 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.618880 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab666d67-1ae7-4ed1-a513-86fb4364b602-audit-dir\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.618931 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.618973 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-router-certs\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619000 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-template-login\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619022 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-template-error\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619043 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-service-ca\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619084 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619107 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-session\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619128 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " 
pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619156 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619181 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619176 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab666d67-1ae7-4ed1-a513-86fb4364b602-audit-dir\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619216 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-audit-policies\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619299 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8n7v\" (UniqueName: 
\"kubernetes.io/projected/ab666d67-1ae7-4ed1-a513-86fb4364b602-kube-api-access-p8n7v\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.619642 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.620034 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-audit-policies\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.620544 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-service-ca\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.621267 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc 
kubenswrapper[4841]: I1204 09:23:35.621666 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.629805 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-template-login\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.631981 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.632223 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-template-error\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.632871 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.633155 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.633343 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-router-certs\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.640383 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.654633 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab666d67-1ae7-4ed1-a513-86fb4364b602-v4-0-config-system-session\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 
09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.655812 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.668623 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8n7v\" (UniqueName: \"kubernetes.io/projected/ab666d67-1ae7-4ed1-a513-86fb4364b602-kube-api-access-p8n7v\") pod \"oauth-openshift-f5cd6b797-25mpj\" (UID: \"ab666d67-1ae7-4ed1-a513-86fb4364b602\") " pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.736953 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.878882 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.969037 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f5cd6b797-25mpj"] Dec 04 09:23:35 crc kubenswrapper[4841]: I1204 09:23:35.993073 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 04 09:23:36 crc kubenswrapper[4841]: I1204 09:23:36.466942 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" event={"ID":"ab666d67-1ae7-4ed1-a513-86fb4364b602","Type":"ContainerStarted","Data":"9cbf89bec5834090d507d9b3e456ea777fb53357ef51da1df3858f3dcc8a7abd"} Dec 04 09:23:36 crc kubenswrapper[4841]: I1204 09:23:36.467555 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" 
event={"ID":"ab666d67-1ae7-4ed1-a513-86fb4364b602","Type":"ContainerStarted","Data":"ff2f492a91295ebf4c8ad42e0b4239d1a98eb0c222111984529eece33cca78f0"} Dec 04 09:23:36 crc kubenswrapper[4841]: I1204 09:23:36.467655 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:36 crc kubenswrapper[4841]: I1204 09:23:36.590188 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 04 09:23:36 crc kubenswrapper[4841]: I1204 09:23:36.760969 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 04 09:23:36 crc kubenswrapper[4841]: I1204 09:23:36.778870 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" Dec 04 09:23:36 crc kubenswrapper[4841]: I1204 09:23:36.810940 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-f5cd6b797-25mpj" podStartSLOduration=61.81091797 podStartE2EDuration="1m1.81091797s" podCreationTimestamp="2025-12-04 09:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:23:36.499426477 +0000 UTC m=+283.251216711" watchObservedRunningTime="2025-12-04 09:23:36.81091797 +0000 UTC m=+283.562708204" Dec 04 09:23:36 crc kubenswrapper[4841]: I1204 09:23:36.888281 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 04 09:23:37 crc kubenswrapper[4841]: I1204 09:23:37.701350 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.283838 4841 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.283956 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.457203 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.457353 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.457404 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.457504 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.457578 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.458008 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.458075 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.458138 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.458158 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.470500 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.482505 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.482618 4841 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577" exitCode=137
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.482719 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.482740 4841 scope.go:117] "RemoveContainer" containerID="f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577"
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.541809 4841 scope.go:117] "RemoveContainer" containerID="f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577"
Dec 04 09:23:38 crc kubenswrapper[4841]: E1204 09:23:38.542645 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577\": container with ID starting with f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577 not found: ID does not exist" containerID="f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577"
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.542690 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577"} err="failed to get container status \"f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577\": rpc error: code = NotFound desc = could not find container \"f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577\": container with ID starting with f05355fdc5f42bd6296ad12ae37f007702c9fc8b534d41f8634890b7110a1577 not found: ID does not exist"
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.558839 4841 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.558888 4841 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.558922 4841 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.558948 4841 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Dec 04 09:23:38 crc kubenswrapper[4841]: I1204 09:23:38.558973 4841 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 04 09:23:39 crc kubenswrapper[4841]: I1204 09:23:39.623733 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Dec 04 09:23:52 crc kubenswrapper[4841]: I1204 09:23:52.579610 4841 generic.go:334] "Generic (PLEG): container finished" podID="b973cff9-c88e-4a16-923e-4ade9d371af0" containerID="f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe" exitCode=0
Dec 04 09:23:52 crc kubenswrapper[4841]: I1204 09:23:52.579787 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" event={"ID":"b973cff9-c88e-4a16-923e-4ade9d371af0","Type":"ContainerDied","Data":"f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe"}
Dec 04 09:23:52 crc kubenswrapper[4841]: I1204 09:23:52.580365 4841 scope.go:117] "RemoveContainer" containerID="f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe"
Dec 04 09:23:53 crc kubenswrapper[4841]: I1204 09:23:53.589282 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" event={"ID":"b973cff9-c88e-4a16-923e-4ade9d371af0","Type":"ContainerStarted","Data":"a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146"}
Dec 04 09:23:53 crc kubenswrapper[4841]: I1204 09:23:53.590243 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh"
Dec 04 09:23:53 crc kubenswrapper[4841]: I1204 09:23:53.592601 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh"
Dec 04 09:23:53 crc kubenswrapper[4841]: I1204 09:23:53.864525 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 04 09:23:54 crc kubenswrapper[4841]: I1204 09:23:54.595798 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Dec 04 09:23:54 crc kubenswrapper[4841]: I1204 09:23:54.597065 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Dec 04 09:23:54 crc kubenswrapper[4841]: I1204 09:23:54.597090 4841 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="697e290b529360924423399c95328e70bd69980a75744da90664213ffb62587e" exitCode=137
Dec 04 09:23:54 crc kubenswrapper[4841]: I1204 09:23:54.597710 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"697e290b529360924423399c95328e70bd69980a75744da90664213ffb62587e"}
Dec 04 09:23:54 crc kubenswrapper[4841]: I1204 09:23:54.597834 4841 scope.go:117] "RemoveContainer" containerID="ca4ec61f50edb34d3af2b942ae35d5edf4f46d310543c26027955b84e530f8d9"
Dec 04 09:23:55 crc kubenswrapper[4841]: I1204 09:23:55.605932 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Dec 04 09:23:55 crc kubenswrapper[4841]: I1204 09:23:55.607661 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"faa948aec8d14e49c5348d70510c0d182c2ddc422bac588ab9bd66765b338b62"}
Dec 04 09:23:59 crc kubenswrapper[4841]: I1204 09:23:59.191202 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 09:24:04 crc kubenswrapper[4841]: I1204 09:24:04.462573 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 09:24:04 crc kubenswrapper[4841]: I1204 09:24:04.469226 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 09:24:04 crc kubenswrapper[4841]: I1204 09:24:04.665312 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.347278 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9mhm"]
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.347841 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" podUID="35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" containerName="controller-manager" containerID="cri-o://69d044a96610f3c43839a20c1aee8645e7806ab64eaedaf5b94e7a804dc84779" gracePeriod=30
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.354668 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"]
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.354895 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" podUID="fb45e445-79c2-407e-bc3d-630465ec46ae" containerName="route-controller-manager" containerID="cri-o://6edc1b232a23d00ed1b81fd25b108e30b9f631206a12609bb5e50d10d94f58e6" gracePeriod=30
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.723350 4841 generic.go:334] "Generic (PLEG): container finished" podID="fb45e445-79c2-407e-bc3d-630465ec46ae" containerID="6edc1b232a23d00ed1b81fd25b108e30b9f631206a12609bb5e50d10d94f58e6" exitCode=0
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.723530 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" event={"ID":"fb45e445-79c2-407e-bc3d-630465ec46ae","Type":"ContainerDied","Data":"6edc1b232a23d00ed1b81fd25b108e30b9f631206a12609bb5e50d10d94f58e6"}
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.724748 4841 generic.go:334] "Generic (PLEG): container finished" podID="35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" containerID="69d044a96610f3c43839a20c1aee8645e7806ab64eaedaf5b94e7a804dc84779" exitCode=0
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.724779 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" event={"ID":"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13","Type":"ContainerDied","Data":"69d044a96610f3c43839a20c1aee8645e7806ab64eaedaf5b94e7a804dc84779"}
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.845443 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.892600 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm"
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.958594 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-client-ca\") pod \"fb45e445-79c2-407e-bc3d-630465ec46ae\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") "
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.958631 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-config\") pod \"fb45e445-79c2-407e-bc3d-630465ec46ae\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") "
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.958698 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb45e445-79c2-407e-bc3d-630465ec46ae-serving-cert\") pod \"fb45e445-79c2-407e-bc3d-630465ec46ae\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") "
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.958735 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgxzj\" (UniqueName: \"kubernetes.io/projected/fb45e445-79c2-407e-bc3d-630465ec46ae-kube-api-access-cgxzj\") pod \"fb45e445-79c2-407e-bc3d-630465ec46ae\" (UID: \"fb45e445-79c2-407e-bc3d-630465ec46ae\") "
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.959372 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "fb45e445-79c2-407e-bc3d-630465ec46ae" (UID: "fb45e445-79c2-407e-bc3d-630465ec46ae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.960057 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-config" (OuterVolumeSpecName: "config") pod "fb45e445-79c2-407e-bc3d-630465ec46ae" (UID: "fb45e445-79c2-407e-bc3d-630465ec46ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.964467 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb45e445-79c2-407e-bc3d-630465ec46ae-kube-api-access-cgxzj" (OuterVolumeSpecName: "kube-api-access-cgxzj") pod "fb45e445-79c2-407e-bc3d-630465ec46ae" (UID: "fb45e445-79c2-407e-bc3d-630465ec46ae"). InnerVolumeSpecName "kube-api-access-cgxzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:24:12 crc kubenswrapper[4841]: I1204 09:24:12.964492 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb45e445-79c2-407e-bc3d-630465ec46ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fb45e445-79c2-407e-bc3d-630465ec46ae" (UID: "fb45e445-79c2-407e-bc3d-630465ec46ae"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.059504 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-serving-cert\") pod \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") "
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.059593 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-proxy-ca-bundles\") pod \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") "
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.059619 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-client-ca\") pod \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") "
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.059666 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-config\") pod \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") "
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.059698 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6p6f\" (UniqueName: \"kubernetes.io/projected/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-kube-api-access-m6p6f\") pod \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\" (UID: \"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13\") "
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.059879 4841 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-client-ca\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.059891 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb45e445-79c2-407e-bc3d-630465ec46ae-config\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.059899 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb45e445-79c2-407e-bc3d-630465ec46ae-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.059907 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgxzj\" (UniqueName: \"kubernetes.io/projected/fb45e445-79c2-407e-bc3d-630465ec46ae-kube-api-access-cgxzj\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.060451 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-client-ca" (OuterVolumeSpecName: "client-ca") pod "35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" (UID: "35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.060603 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" (UID: "35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.060847 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-config" (OuterVolumeSpecName: "config") pod "35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" (UID: "35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.065179 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-kube-api-access-m6p6f" (OuterVolumeSpecName: "kube-api-access-m6p6f") pod "35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" (UID: "35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13"). InnerVolumeSpecName "kube-api-access-m6p6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.065466 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" (UID: "35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.161086 4841 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.161596 4841 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-client-ca\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.161663 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-config\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.161727 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6p6f\" (UniqueName: \"kubernetes.io/projected/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-kube-api-access-m6p6f\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.161817 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.543044 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"]
Dec 04 09:24:13 crc kubenswrapper[4841]: E1204 09:24:13.544179 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" containerName="controller-manager"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.544250 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" containerName="controller-manager"
Dec 04 09:24:13 crc kubenswrapper[4841]: E1204 09:24:13.544334 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb45e445-79c2-407e-bc3d-630465ec46ae" containerName="route-controller-manager"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.544388 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb45e445-79c2-407e-bc3d-630465ec46ae" containerName="route-controller-manager"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.544534 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb45e445-79c2-407e-bc3d-630465ec46ae" containerName="route-controller-manager"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.544603 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" containerName="controller-manager"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.545096 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.547984 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f8c5f4747-f69v5"]
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.548718 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.554034 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f8c5f4747-f69v5"]
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.559932 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"]
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.667612 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljbwq\" (UniqueName: \"kubernetes.io/projected/f4141cca-06d7-4e2d-9970-c059d231e6ca-kube-api-access-ljbwq\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.667691 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88461373-a228-429d-8b73-b96c7dbca2ef-serving-cert\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.667730 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-client-ca\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.667797 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4141cca-06d7-4e2d-9970-c059d231e6ca-proxy-ca-bundles\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.667836 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4141cca-06d7-4e2d-9970-c059d231e6ca-client-ca\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.667865 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4141cca-06d7-4e2d-9970-c059d231e6ca-serving-cert\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.667892 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-config\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.667928 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4141cca-06d7-4e2d-9970-c059d231e6ca-config\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.667968 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gc6p\" (UniqueName: \"kubernetes.io/projected/88461373-a228-429d-8b73-b96c7dbca2ef-kube-api-access-5gc6p\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.732489 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn" event={"ID":"fb45e445-79c2-407e-bc3d-630465ec46ae","Type":"ContainerDied","Data":"cc182ae347d0e9b040f45900291ba09ea13017521b6d96f9e9e2f4d90d513394"}
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.732508 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.732568 4841 scope.go:117] "RemoveContainer" containerID="6edc1b232a23d00ed1b81fd25b108e30b9f631206a12609bb5e50d10d94f58e6"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.733901 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm" event={"ID":"35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13","Type":"ContainerDied","Data":"3e9daca449d7b0def04dc266a4d228265c2913d33d7bcb58475b063388eef7ed"}
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.733995 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m9mhm"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.749992 4841 scope.go:117] "RemoveContainer" containerID="69d044a96610f3c43839a20c1aee8645e7806ab64eaedaf5b94e7a804dc84779"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.751673 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"]
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.755462 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2c4xn"]
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.765443 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9mhm"]
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.768522 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4141cca-06d7-4e2d-9970-c059d231e6ca-client-ca\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.768555 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4141cca-06d7-4e2d-9970-c059d231e6ca-serving-cert\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.768578 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-config\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.768604 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4141cca-06d7-4e2d-9970-c059d231e6ca-config\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.768630 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gc6p\" (UniqueName: \"kubernetes.io/projected/88461373-a228-429d-8b73-b96c7dbca2ef-kube-api-access-5gc6p\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.768659 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljbwq\" (UniqueName: \"kubernetes.io/projected/f4141cca-06d7-4e2d-9970-c059d231e6ca-kube-api-access-ljbwq\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.768681 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88461373-a228-429d-8b73-b96c7dbca2ef-serving-cert\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.768697 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-client-ca\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.768714 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4141cca-06d7-4e2d-9970-c059d231e6ca-proxy-ca-bundles\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.769398 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4141cca-06d7-4e2d-9970-c059d231e6ca-client-ca\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.769748 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-client-ca\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.769949 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4141cca-06d7-4e2d-9970-c059d231e6ca-proxy-ca-bundles\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.769969 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9mhm"]
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.770076 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-config\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.770803 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4141cca-06d7-4e2d-9970-c059d231e6ca-config\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.774305 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88461373-a228-429d-8b73-b96c7dbca2ef-serving-cert\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.774865 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4141cca-06d7-4e2d-9970-c059d231e6ca-serving-cert\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5"
Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.787963 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljbwq\" (UniqueName:
\"kubernetes.io/projected/f4141cca-06d7-4e2d-9970-c059d231e6ca-kube-api-access-ljbwq\") pod \"controller-manager-f8c5f4747-f69v5\" (UID: \"f4141cca-06d7-4e2d-9970-c059d231e6ca\") " pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5" Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.792116 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gc6p\" (UniqueName: \"kubernetes.io/projected/88461373-a228-429d-8b73-b96c7dbca2ef-kube-api-access-5gc6p\") pod \"route-controller-manager-6f4c4ddfdb-rs5s9\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") " pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.872193 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" Dec 04 09:24:13 crc kubenswrapper[4841]: I1204 09:24:13.883060 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5" Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 09:24:14.115640 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f8c5f4747-f69v5"] Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 09:24:14.304022 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"] Dec 04 09:24:14 crc kubenswrapper[4841]: W1204 09:24:14.309674 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88461373_a228_429d_8b73_b96c7dbca2ef.slice/crio-b48c52bfb0c10bc81ff690af99d6df03d25a1b72851cdee15102df7479aca8b9 WatchSource:0}: Error finding container b48c52bfb0c10bc81ff690af99d6df03d25a1b72851cdee15102df7479aca8b9: Status 404 returned error can't find the container with id b48c52bfb0c10bc81ff690af99d6df03d25a1b72851cdee15102df7479aca8b9 Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 09:24:14.740165 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" event={"ID":"88461373-a228-429d-8b73-b96c7dbca2ef","Type":"ContainerStarted","Data":"75e8933ef01af65fde64f6cf5782a95ae80dd9ae9f7501f31bf87ee1f29a15f9"} Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 09:24:14.740799 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" event={"ID":"88461373-a228-429d-8b73-b96c7dbca2ef","Type":"ContainerStarted","Data":"b48c52bfb0c10bc81ff690af99d6df03d25a1b72851cdee15102df7479aca8b9"} Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 09:24:14.740822 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 
09:24:14.743497 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5" event={"ID":"f4141cca-06d7-4e2d-9970-c059d231e6ca","Type":"ContainerStarted","Data":"931cbb95601ac61e2fced6ec47280defe045109756b450bac9913a9ac0335021"} Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 09:24:14.743531 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5" event={"ID":"f4141cca-06d7-4e2d-9970-c059d231e6ca","Type":"ContainerStarted","Data":"e68a5c6c02e4568a16bfee4f7b31fd9cb64d0a9201b5eb18ab32a8bc7051b97e"} Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 09:24:14.743737 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5" Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 09:24:14.759901 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" podStartSLOduration=2.759883748 podStartE2EDuration="2.759883748s" podCreationTimestamp="2025-12-04 09:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:24:14.758016178 +0000 UTC m=+321.509806382" watchObservedRunningTime="2025-12-04 09:24:14.759883748 +0000 UTC m=+321.511673952" Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 09:24:14.769131 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5" Dec 04 09:24:14 crc kubenswrapper[4841]: I1204 09:24:14.780424 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f8c5f4747-f69v5" podStartSLOduration=2.780408914 podStartE2EDuration="2.780408914s" podCreationTimestamp="2025-12-04 09:24:12 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:24:14.778530954 +0000 UTC m=+321.530321158" watchObservedRunningTime="2025-12-04 09:24:14.780408914 +0000 UTC m=+321.532199118" Dec 04 09:24:15 crc kubenswrapper[4841]: I1204 09:24:15.088544 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" Dec 04 09:24:15 crc kubenswrapper[4841]: I1204 09:24:15.622964 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13" path="/var/lib/kubelet/pods/35cca6f7-edd3-4d7e-8a34-cf30fe0f7e13/volumes" Dec 04 09:24:15 crc kubenswrapper[4841]: I1204 09:24:15.623807 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb45e445-79c2-407e-bc3d-630465ec46ae" path="/var/lib/kubelet/pods/fb45e445-79c2-407e-bc3d-630465ec46ae/volumes" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.092185 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mlvxz"] Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.093486 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mlvxz" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerName="registry-server" containerID="cri-o://49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8" gracePeriod=30 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.093641 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n8h7m"] Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.094172 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n8h7m" podUID="24965104-a4c2-41bc-90af-19b331f214f0" containerName="registry-server" 
containerID="cri-o://74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a" gracePeriod=30 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.104026 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rpzh"] Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.104297 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" podUID="b973cff9-c88e-4a16-923e-4ade9d371af0" containerName="marketplace-operator" containerID="cri-o://a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146" gracePeriod=30 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.120197 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lmrm"] Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.120424 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4lmrm" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" containerName="registry-server" containerID="cri-o://81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8" gracePeriod=30 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.128534 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kk8z"] Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.129236 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.132653 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vw6k8"] Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.133003 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vw6k8" podUID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerName="registry-server" containerID="cri-o://3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9" gracePeriod=30 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.147472 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kk8z"] Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.220227 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npqsw\" (UniqueName: \"kubernetes.io/projected/d3605f15-1f3c-4177-9401-1cb41d6d417b-kube-api-access-npqsw\") pod \"marketplace-operator-79b997595-6kk8z\" (UID: \"d3605f15-1f3c-4177-9401-1cb41d6d417b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.220273 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3605f15-1f3c-4177-9401-1cb41d6d417b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kk8z\" (UID: \"d3605f15-1f3c-4177-9401-1cb41d6d417b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.220293 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d3605f15-1f3c-4177-9401-1cb41d6d417b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kk8z\" (UID: \"d3605f15-1f3c-4177-9401-1cb41d6d417b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.287925 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"] Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.288341 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" podUID="88461373-a228-429d-8b73-b96c7dbca2ef" containerName="route-controller-manager" containerID="cri-o://75e8933ef01af65fde64f6cf5782a95ae80dd9ae9f7501f31bf87ee1f29a15f9" gracePeriod=30 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.322828 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npqsw\" (UniqueName: \"kubernetes.io/projected/d3605f15-1f3c-4177-9401-1cb41d6d417b-kube-api-access-npqsw\") pod \"marketplace-operator-79b997595-6kk8z\" (UID: \"d3605f15-1f3c-4177-9401-1cb41d6d417b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.323513 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3605f15-1f3c-4177-9401-1cb41d6d417b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kk8z\" (UID: \"d3605f15-1f3c-4177-9401-1cb41d6d417b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.323536 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d3605f15-1f3c-4177-9401-1cb41d6d417b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kk8z\" (UID: \"d3605f15-1f3c-4177-9401-1cb41d6d417b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.324690 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3605f15-1f3c-4177-9401-1cb41d6d417b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kk8z\" (UID: \"d3605f15-1f3c-4177-9401-1cb41d6d417b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.342720 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3605f15-1f3c-4177-9401-1cb41d6d417b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kk8z\" (UID: \"d3605f15-1f3c-4177-9401-1cb41d6d417b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.344237 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npqsw\" (UniqueName: \"kubernetes.io/projected/d3605f15-1f3c-4177-9401-1cb41d6d417b-kube-api-access-npqsw\") pod \"marketplace-operator-79b997595-6kk8z\" (UID: \"d3605f15-1f3c-4177-9401-1cb41d6d417b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.450067 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.707737 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.795706 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.819408 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.835056 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-catalog-content\") pod \"24965104-a4c2-41bc-90af-19b331f214f0\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.835141 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbkmz\" (UniqueName: \"kubernetes.io/projected/24965104-a4c2-41bc-90af-19b331f214f0-kube-api-access-rbkmz\") pod \"24965104-a4c2-41bc-90af-19b331f214f0\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.835174 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-utilities\") pod \"24965104-a4c2-41bc-90af-19b331f214f0\" (UID: \"24965104-a4c2-41bc-90af-19b331f214f0\") " Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.836129 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-utilities" (OuterVolumeSpecName: "utilities") pod "24965104-a4c2-41bc-90af-19b331f214f0" (UID: "24965104-a4c2-41bc-90af-19b331f214f0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.841954 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24965104-a4c2-41bc-90af-19b331f214f0-kube-api-access-rbkmz" (OuterVolumeSpecName: "kube-api-access-rbkmz") pod "24965104-a4c2-41bc-90af-19b331f214f0" (UID: "24965104-a4c2-41bc-90af-19b331f214f0"). InnerVolumeSpecName "kube-api-access-rbkmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.869918 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.882061 4841 generic.go:334] "Generic (PLEG): container finished" podID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerID="3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9" exitCode=0 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.882108 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw6k8" event={"ID":"6275695a-0b4a-4e12-affd-bfabdffcf529","Type":"ContainerDied","Data":"3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.882131 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw6k8" event={"ID":"6275695a-0b4a-4e12-affd-bfabdffcf529","Type":"ContainerDied","Data":"237395d05f8760e213c63f4f81d6ccf5aa73620f966713efd046faa466721047"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.882147 4841 scope.go:117] "RemoveContainer" containerID="3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.882249 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vw6k8" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.882534 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.883660 4841 generic.go:334] "Generic (PLEG): container finished" podID="88461373-a228-429d-8b73-b96c7dbca2ef" containerID="75e8933ef01af65fde64f6cf5782a95ae80dd9ae9f7501f31bf87ee1f29a15f9" exitCode=0 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.883701 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" event={"ID":"88461373-a228-429d-8b73-b96c7dbca2ef","Type":"ContainerDied","Data":"75e8933ef01af65fde64f6cf5782a95ae80dd9ae9f7501f31bf87ee1f29a15f9"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.898106 4841 generic.go:334] "Generic (PLEG): container finished" podID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerID="49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8" exitCode=0 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.898159 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlvxz" event={"ID":"9edf6830-72af-441c-b5ff-c9b65706dcc0","Type":"ContainerDied","Data":"49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.898184 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mlvxz" event={"ID":"9edf6830-72af-441c-b5ff-c9b65706dcc0","Type":"ContainerDied","Data":"79d479a63e7a3cc8be132ca68f322259ae5a5e57263bcf88df7d3481909676fa"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.898244 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mlvxz" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.901478 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24965104-a4c2-41bc-90af-19b331f214f0" (UID: "24965104-a4c2-41bc-90af-19b331f214f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.904486 4841 scope.go:117] "RemoveContainer" containerID="253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.911530 4841 generic.go:334] "Generic (PLEG): container finished" podID="24965104-a4c2-41bc-90af-19b331f214f0" containerID="74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a" exitCode=0 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.911594 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8h7m" event={"ID":"24965104-a4c2-41bc-90af-19b331f214f0","Type":"ContainerDied","Data":"74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.911620 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8h7m" event={"ID":"24965104-a4c2-41bc-90af-19b331f214f0","Type":"ContainerDied","Data":"8a253062ea02a22f77132019d187304cce40a69aa1b697c3bb33a51beb8535ec"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.911694 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n8h7m" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.915667 4841 generic.go:334] "Generic (PLEG): container finished" podID="b973cff9-c88e-4a16-923e-4ade9d371af0" containerID="a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146" exitCode=0 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.915722 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.915740 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" event={"ID":"b973cff9-c88e-4a16-923e-4ade9d371af0","Type":"ContainerDied","Data":"a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.916661 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rpzh" event={"ID":"b973cff9-c88e-4a16-923e-4ade9d371af0","Type":"ContainerDied","Data":"7fa84d47433019298887bf6e54c8df80856e3b2a92abafb4dcdf654f08c990ee"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.919561 4841 generic.go:334] "Generic (PLEG): container finished" podID="b148c13b-ac9d-4df8-9960-7a98df30bc57" containerID="81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8" exitCode=0 Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.919594 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lmrm" event={"ID":"b148c13b-ac9d-4df8-9960-7a98df30bc57","Type":"ContainerDied","Data":"81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.919617 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lmrm" 
event={"ID":"b148c13b-ac9d-4df8-9960-7a98df30bc57","Type":"ContainerDied","Data":"db4eeb6899dbed08b94bc02dfa9a80c2926173a40d0e80230d8f6c649852558a"} Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.919656 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lmrm" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.926451 4841 scope.go:117] "RemoveContainer" containerID="4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122" Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.937734 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlxd2\" (UniqueName: \"kubernetes.io/projected/6275695a-0b4a-4e12-affd-bfabdffcf529-kube-api-access-vlxd2\") pod \"6275695a-0b4a-4e12-affd-bfabdffcf529\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.937971 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-utilities\") pod \"6275695a-0b4a-4e12-affd-bfabdffcf529\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") " Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.938008 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-utilities\") pod \"b148c13b-ac9d-4df8-9960-7a98df30bc57\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.938024 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grb29\" (UniqueName: \"kubernetes.io/projected/b148c13b-ac9d-4df8-9960-7a98df30bc57-kube-api-access-grb29\") pod \"b148c13b-ac9d-4df8-9960-7a98df30bc57\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") " Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 
09:24:40.938045 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-catalog-content\") pod \"b148c13b-ac9d-4df8-9960-7a98df30bc57\" (UID: \"b148c13b-ac9d-4df8-9960-7a98df30bc57\") "
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.938115 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-catalog-content\") pod \"6275695a-0b4a-4e12-affd-bfabdffcf529\" (UID: \"6275695a-0b4a-4e12-affd-bfabdffcf529\") "
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.938300 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.938318 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbkmz\" (UniqueName: \"kubernetes.io/projected/24965104-a4c2-41bc-90af-19b331f214f0-kube-api-access-rbkmz\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.938331 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24965104-a4c2-41bc-90af-19b331f214f0-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.938641 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-utilities" (OuterVolumeSpecName: "utilities") pod "b148c13b-ac9d-4df8-9960-7a98df30bc57" (UID: "b148c13b-ac9d-4df8-9960-7a98df30bc57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.938782 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-utilities" (OuterVolumeSpecName: "utilities") pod "6275695a-0b4a-4e12-affd-bfabdffcf529" (UID: "6275695a-0b4a-4e12-affd-bfabdffcf529"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.939454 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n8h7m"]
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.942950 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n8h7m"]
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.962074 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b148c13b-ac9d-4df8-9960-7a98df30bc57-kube-api-access-grb29" (OuterVolumeSpecName: "kube-api-access-grb29") pod "b148c13b-ac9d-4df8-9960-7a98df30bc57" (UID: "b148c13b-ac9d-4df8-9960-7a98df30bc57"). InnerVolumeSpecName "kube-api-access-grb29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.962129 4841 scope.go:117] "RemoveContainer" containerID="3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9"
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.962691 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6275695a-0b4a-4e12-affd-bfabdffcf529-kube-api-access-vlxd2" (OuterVolumeSpecName: "kube-api-access-vlxd2") pod "6275695a-0b4a-4e12-affd-bfabdffcf529" (UID: "6275695a-0b4a-4e12-affd-bfabdffcf529"). InnerVolumeSpecName "kube-api-access-vlxd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:24:40 crc kubenswrapper[4841]: E1204 09:24:40.964852 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9\": container with ID starting with 3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9 not found: ID does not exist" containerID="3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9"
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.964904 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9"} err="failed to get container status \"3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9\": rpc error: code = NotFound desc = could not find container \"3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9\": container with ID starting with 3cfeed500c9ee7c2fa0c20c9080f47dea063302d81a302c0af1ac52ef81e1ef9 not found: ID does not exist"
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.964933 4841 scope.go:117] "RemoveContainer" containerID="253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2"
Dec 04 09:24:40 crc kubenswrapper[4841]: E1204 09:24:40.965367 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2\": container with ID starting with 253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2 not found: ID does not exist" containerID="253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2"
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.965398 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2"} err="failed to get container status \"253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2\": rpc error: code = NotFound desc = could not find container \"253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2\": container with ID starting with 253e0aacf868c47d977784f2698f0b32705f6446d8a9776a725fc8e067b212a2 not found: ID does not exist"
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.965419 4841 scope.go:117] "RemoveContainer" containerID="4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122"
Dec 04 09:24:40 crc kubenswrapper[4841]: E1204 09:24:40.965666 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122\": container with ID starting with 4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122 not found: ID does not exist" containerID="4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122"
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.965682 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122"} err="failed to get container status \"4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122\": rpc error: code = NotFound desc = could not find container \"4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122\": container with ID starting with 4c409681f60a1a63a72c0e68f00e466eded9c05947ce39255df4f74405720122 not found: ID does not exist"
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.965693 4841 scope.go:117] "RemoveContainer" containerID="49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8"
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.983018 4841 scope.go:117] "RemoveContainer" containerID="f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf"
Dec 04 09:24:40 crc kubenswrapper[4841]: I1204 09:24:40.986264 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b148c13b-ac9d-4df8-9960-7a98df30bc57" (UID: "b148c13b-ac9d-4df8-9960-7a98df30bc57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.002116 4841 scope.go:117] "RemoveContainer" containerID="dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.021032 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.036188 4841 scope.go:117] "RemoveContainer" containerID="49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.037604 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8\": container with ID starting with 49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8 not found: ID does not exist" containerID="49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.037657 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8"} err="failed to get container status \"49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8\": rpc error: code = NotFound desc = could not find container \"49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8\": container with ID starting with 49ac83edbe9e2d480c8294c2942db1f144e9a151736219dc4d6489b5750dc8f8 not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.037689 4841 scope.go:117] "RemoveContainer" containerID="f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.038053 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf\": container with ID starting with f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf not found: ID does not exist" containerID="f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.038079 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf"} err="failed to get container status \"f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf\": rpc error: code = NotFound desc = could not find container \"f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf\": container with ID starting with f402231c24237f182d447e4a6581a1e266d8458cc6275132aa44b8b3935d1edf not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.038097 4841 scope.go:117] "RemoveContainer" containerID="dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.038392 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1\": container with ID starting with dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1 not found: ID does not exist" containerID="dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.038416 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1"} err="failed to get container status \"dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1\": rpc error: code = NotFound desc = could not find container \"dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1\": container with ID starting with dc0efe96e8abb84e8b34d48e76f026055276c71e34227a0c0763eddfbf18ecc1 not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.038433 4841 scope.go:117] "RemoveContainer" containerID="74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.038815 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-catalog-content\") pod \"9edf6830-72af-441c-b5ff-c9b65706dcc0\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") "
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.038860 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nkn4\" (UniqueName: \"kubernetes.io/projected/b973cff9-c88e-4a16-923e-4ade9d371af0-kube-api-access-8nkn4\") pod \"b973cff9-c88e-4a16-923e-4ade9d371af0\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") "
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.038892 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-operator-metrics\") pod \"b973cff9-c88e-4a16-923e-4ade9d371af0\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") "
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.038989 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-utilities\") pod \"9edf6830-72af-441c-b5ff-c9b65706dcc0\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") "
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.039012 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcjpl\" (UniqueName: \"kubernetes.io/projected/9edf6830-72af-441c-b5ff-c9b65706dcc0-kube-api-access-hcjpl\") pod \"9edf6830-72af-441c-b5ff-c9b65706dcc0\" (UID: \"9edf6830-72af-441c-b5ff-c9b65706dcc0\") "
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.039051 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-trusted-ca\") pod \"b973cff9-c88e-4a16-923e-4ade9d371af0\" (UID: \"b973cff9-c88e-4a16-923e-4ade9d371af0\") "
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.039267 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlxd2\" (UniqueName: \"kubernetes.io/projected/6275695a-0b4a-4e12-affd-bfabdffcf529-kube-api-access-vlxd2\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.039283 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.039294 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.039306 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grb29\" (UniqueName: \"kubernetes.io/projected/b148c13b-ac9d-4df8-9960-7a98df30bc57-kube-api-access-grb29\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.039317 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b148c13b-ac9d-4df8-9960-7a98df30bc57-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.041284 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b973cff9-c88e-4a16-923e-4ade9d371af0" (UID: "b973cff9-c88e-4a16-923e-4ade9d371af0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.050617 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-utilities" (OuterVolumeSpecName: "utilities") pod "9edf6830-72af-441c-b5ff-c9b65706dcc0" (UID: "9edf6830-72af-441c-b5ff-c9b65706dcc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.052206 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b973cff9-c88e-4a16-923e-4ade9d371af0-kube-api-access-8nkn4" (OuterVolumeSpecName: "kube-api-access-8nkn4") pod "b973cff9-c88e-4a16-923e-4ade9d371af0" (UID: "b973cff9-c88e-4a16-923e-4ade9d371af0"). InnerVolumeSpecName "kube-api-access-8nkn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.053106 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b973cff9-c88e-4a16-923e-4ade9d371af0" (UID: "b973cff9-c88e-4a16-923e-4ade9d371af0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.059106 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9edf6830-72af-441c-b5ff-c9b65706dcc0-kube-api-access-hcjpl" (OuterVolumeSpecName: "kube-api-access-hcjpl") pod "9edf6830-72af-441c-b5ff-c9b65706dcc0" (UID: "9edf6830-72af-441c-b5ff-c9b65706dcc0"). InnerVolumeSpecName "kube-api-access-hcjpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.069336 4841 scope.go:117] "RemoveContainer" containerID="f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.088787 4841 scope.go:117] "RemoveContainer" containerID="5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.098481 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6275695a-0b4a-4e12-affd-bfabdffcf529" (UID: "6275695a-0b4a-4e12-affd-bfabdffcf529"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.105447 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9edf6830-72af-441c-b5ff-c9b65706dcc0" (UID: "9edf6830-72af-441c-b5ff-c9b65706dcc0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.110354 4841 scope.go:117] "RemoveContainer" containerID="74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.110924 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a\": container with ID starting with 74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a not found: ID does not exist" containerID="74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.110966 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a"} err="failed to get container status \"74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a\": rpc error: code = NotFound desc = could not find container \"74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a\": container with ID starting with 74aaa4ae1dc53faa996d11cd0c6154384dfc6b1caa9683f40749e24bb636a55a not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.110997 4841 scope.go:117] "RemoveContainer" containerID="f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.111627 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867\": container with ID starting with f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867 not found: ID does not exist" containerID="f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.111675 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867"} err="failed to get container status \"f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867\": rpc error: code = NotFound desc = could not find container \"f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867\": container with ID starting with f65ef708a9937dbe8dbd4e0914292a1e616e1e6eadcee8c58ac771114096a867 not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.111712 4841 scope.go:117] "RemoveContainer" containerID="5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.112157 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc\": container with ID starting with 5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc not found: ID does not exist" containerID="5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.112211 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc"} err="failed to get container status \"5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc\": rpc error: code = NotFound desc = could not find container \"5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc\": container with ID starting with 5f454ef4bbd03bf1e4b2c9d1ccc264d04c57add087d4f6a8eb3ea4979089b4fc not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.112257 4841 scope.go:117] "RemoveContainer" containerID="a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.118070 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kk8z"]
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.128516 4841 scope.go:117] "RemoveContainer" containerID="f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140447 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gc6p\" (UniqueName: \"kubernetes.io/projected/88461373-a228-429d-8b73-b96c7dbca2ef-kube-api-access-5gc6p\") pod \"88461373-a228-429d-8b73-b96c7dbca2ef\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") "
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140516 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-config\") pod \"88461373-a228-429d-8b73-b96c7dbca2ef\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") "
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140569 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-client-ca\") pod \"88461373-a228-429d-8b73-b96c7dbca2ef\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") "
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140603 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88461373-a228-429d-8b73-b96c7dbca2ef-serving-cert\") pod \"88461373-a228-429d-8b73-b96c7dbca2ef\" (UID: \"88461373-a228-429d-8b73-b96c7dbca2ef\") "
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140870 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6275695a-0b4a-4e12-affd-bfabdffcf529-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140891 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140903 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcjpl\" (UniqueName: \"kubernetes.io/projected/9edf6830-72af-441c-b5ff-c9b65706dcc0-kube-api-access-hcjpl\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140916 4841 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140928 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9edf6830-72af-441c-b5ff-c9b65706dcc0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140939 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nkn4\" (UniqueName: \"kubernetes.io/projected/b973cff9-c88e-4a16-923e-4ade9d371af0-kube-api-access-8nkn4\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.140950 4841 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b973cff9-c88e-4a16-923e-4ade9d371af0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.142106 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-config" (OuterVolumeSpecName: "config") pod "88461373-a228-429d-8b73-b96c7dbca2ef" (UID: "88461373-a228-429d-8b73-b96c7dbca2ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.142233 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-client-ca" (OuterVolumeSpecName: "client-ca") pod "88461373-a228-429d-8b73-b96c7dbca2ef" (UID: "88461373-a228-429d-8b73-b96c7dbca2ef"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.143315 4841 scope.go:117] "RemoveContainer" containerID="a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.144371 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146\": container with ID starting with a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146 not found: ID does not exist" containerID="a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.144418 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146"} err="failed to get container status \"a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146\": rpc error: code = NotFound desc = could not find container \"a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146\": container with ID starting with a99d1632384f1d31ad071fd0f780f19f73ad6b070a8fbb80b0dc130a5d6be146 not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.144448 4841 scope.go:117] "RemoveContainer" containerID="f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.144877 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe\": container with ID starting with f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe not found: ID does not exist" containerID="f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.144894 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88461373-a228-429d-8b73-b96c7dbca2ef-kube-api-access-5gc6p" (OuterVolumeSpecName: "kube-api-access-5gc6p") pod "88461373-a228-429d-8b73-b96c7dbca2ef" (UID: "88461373-a228-429d-8b73-b96c7dbca2ef"). InnerVolumeSpecName "kube-api-access-5gc6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.144917 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe"} err="failed to get container status \"f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe\": rpc error: code = NotFound desc = could not find container \"f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe\": container with ID starting with f42d56dc31d487efb76d9228bc1d9a5a0366d3a5f277b60df1bf00b4f24cddbe not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.144945 4841 scope.go:117] "RemoveContainer" containerID="81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.145779 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88461373-a228-429d-8b73-b96c7dbca2ef-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88461373-a228-429d-8b73-b96c7dbca2ef" (UID: "88461373-a228-429d-8b73-b96c7dbca2ef"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.169313 4841 scope.go:117] "RemoveContainer" containerID="9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.181829 4841 scope.go:117] "RemoveContainer" containerID="87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.213352 4841 scope.go:117] "RemoveContainer" containerID="81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.213425 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vw6k8"]
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.213940 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8\": container with ID starting with 81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8 not found: ID does not exist" containerID="81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.213985 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8"} err="failed to get container status \"81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8\": rpc error: code = NotFound desc = could not find container \"81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8\": container with ID starting with 81d837d342abbc7689a9f7b742c35e0019e00e6d5ffc5bf1075f69cffaa829d8 not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.214015 4841 scope.go:117] "RemoveContainer" containerID="9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.214294 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384\": container with ID starting with 9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384 not found: ID does not exist" containerID="9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.214320 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384"} err="failed to get container status \"9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384\": rpc error: code = NotFound desc = could not find container \"9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384\": container with ID starting with 9c09415e8248538f6b08bedf42168cae5435ea92b55d041c34e222d4d7858384 not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.214331 4841 scope.go:117] "RemoveContainer" containerID="87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.214545 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7\": container with ID starting with 87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7 not found: ID does not exist" containerID="87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.214563 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7"} err="failed to get container status \"87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7\": rpc error: code = NotFound desc = could not find container \"87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7\": container with ID starting with 87b2316b2535d8ac249f152cba9089f9036a32f4a40bd5091ef0c11b613b0bc7 not found: ID does not exist"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.216471 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vw6k8"]
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.242707 4841 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-config\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.242747 4841 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88461373-a228-429d-8b73-b96c7dbca2ef-client-ca\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.242808 4841 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88461373-a228-429d-8b73-b96c7dbca2ef-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.242827 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gc6p\" (UniqueName: \"kubernetes.io/projected/88461373-a228-429d-8b73-b96c7dbca2ef-kube-api-access-5gc6p\") on node \"crc\" DevicePath \"\""
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.294495 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mlvxz"]
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.297722 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mlvxz"]
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.302537 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rpzh"]
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.305505 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rpzh"]
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.312652 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lmrm"]
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.316800 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lmrm"]
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.568696 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn"]
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.568952 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88461373-a228-429d-8b73-b96c7dbca2ef" containerName="route-controller-manager"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.568990 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="88461373-a228-429d-8b73-b96c7dbca2ef" containerName="route-controller-manager"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569003 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" containerName="registry-server"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569013 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" containerName="registry-server"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569029 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" containerName="extract-utilities"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569039 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" containerName="extract-utilities"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569053 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerName="extract-content"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569064 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerName="extract-content"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569078 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24965104-a4c2-41bc-90af-19b331f214f0" containerName="extract-content"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569088 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="24965104-a4c2-41bc-90af-19b331f214f0" containerName="extract-content"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569102 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerName="registry-server"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569110 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerName="registry-server"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569123 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerName="extract-utilities"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569131 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerName="extract-utilities"
Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569140 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24965104-a4c2-41bc-90af-19b331f214f0" containerName="registry-server"
Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569148 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="24965104-a4c2-41bc-90af-19b331f214f0"
containerName="registry-server" Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569160 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerName="extract-content" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569169 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerName="extract-content" Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569182 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24965104-a4c2-41bc-90af-19b331f214f0" containerName="extract-utilities" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569190 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="24965104-a4c2-41bc-90af-19b331f214f0" containerName="extract-utilities" Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569199 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" containerName="extract-content" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569207 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" containerName="extract-content" Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569219 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerName="registry-server" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569227 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerName="registry-server" Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569236 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b973cff9-c88e-4a16-923e-4ade9d371af0" containerName="marketplace-operator" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569243 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="b973cff9-c88e-4a16-923e-4ade9d371af0" 
containerName="marketplace-operator" Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569255 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerName="extract-utilities" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569264 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerName="extract-utilities" Dec 04 09:24:41 crc kubenswrapper[4841]: E1204 09:24:41.569276 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b973cff9-c88e-4a16-923e-4ade9d371af0" containerName="marketplace-operator" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569284 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="b973cff9-c88e-4a16-923e-4ade9d371af0" containerName="marketplace-operator" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569401 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" containerName="registry-server" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569413 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="88461373-a228-429d-8b73-b96c7dbca2ef" containerName="route-controller-manager" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569427 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="b973cff9-c88e-4a16-923e-4ade9d371af0" containerName="marketplace-operator" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569437 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="b973cff9-c88e-4a16-923e-4ade9d371af0" containerName="marketplace-operator" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569445 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" containerName="registry-server" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569458 4841 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="24965104-a4c2-41bc-90af-19b331f214f0" containerName="registry-server" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569468 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="6275695a-0b4a-4e12-affd-bfabdffcf529" containerName="registry-server" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.569899 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.580488 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn"] Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.624259 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24965104-a4c2-41bc-90af-19b331f214f0" path="/var/lib/kubelet/pods/24965104-a4c2-41bc-90af-19b331f214f0/volumes" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.625100 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6275695a-0b4a-4e12-affd-bfabdffcf529" path="/var/lib/kubelet/pods/6275695a-0b4a-4e12-affd-bfabdffcf529/volumes" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.625877 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9edf6830-72af-441c-b5ff-c9b65706dcc0" path="/var/lib/kubelet/pods/9edf6830-72af-441c-b5ff-c9b65706dcc0/volumes" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.627193 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b148c13b-ac9d-4df8-9960-7a98df30bc57" path="/var/lib/kubelet/pods/b148c13b-ac9d-4df8-9960-7a98df30bc57/volumes" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.628008 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b973cff9-c88e-4a16-923e-4ade9d371af0" path="/var/lib/kubelet/pods/b973cff9-c88e-4a16-923e-4ade9d371af0/volumes" Dec 04 09:24:41 crc 
kubenswrapper[4841]: I1204 09:24:41.753343 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-serving-cert\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.753411 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-config\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.753466 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bzf\" (UniqueName: \"kubernetes.io/projected/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-kube-api-access-c9bzf\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.753487 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-client-ca\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.854445 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-serving-cert\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.854778 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-config\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.854841 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bzf\" (UniqueName: \"kubernetes.io/projected/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-kube-api-access-c9bzf\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.854870 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-client-ca\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.855850 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-client-ca\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc 
kubenswrapper[4841]: I1204 09:24:41.855927 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-config\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.857787 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-serving-cert\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.876553 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bzf\" (UniqueName: \"kubernetes.io/projected/9f3eb5f8-a7cf-479c-9374-5148ca9c0654-kube-api-access-c9bzf\") pod \"route-controller-manager-99655dc9d-gvqnn\" (UID: \"9f3eb5f8-a7cf-479c-9374-5148ca9c0654\") " pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.882342 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.930813 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" event={"ID":"88461373-a228-429d-8b73-b96c7dbca2ef","Type":"ContainerDied","Data":"b48c52bfb0c10bc81ff690af99d6df03d25a1b72851cdee15102df7479aca8b9"} Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.931013 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.931104 4841 scope.go:117] "RemoveContainer" containerID="75e8933ef01af65fde64f6cf5782a95ae80dd9ae9f7501f31bf87ee1f29a15f9" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.936207 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" event={"ID":"d3605f15-1f3c-4177-9401-1cb41d6d417b","Type":"ContainerStarted","Data":"b2f5bfb65cc46ae7fffe79fe2667a6ebff44594e05ca3455723aa11a0a099d35"} Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.936256 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" event={"ID":"d3605f15-1f3c-4177-9401-1cb41d6d417b","Type":"ContainerStarted","Data":"2c71026c6238459e13560029e0f5ff571c4f0fd43c95a1e849f6d63a34a86cb3"} Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.936682 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.937660 4841 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6kk8z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" start-of-body= Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.937714 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" podUID="d3605f15-1f3c-4177-9401-1cb41d6d417b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" Dec 04 09:24:41 crc kubenswrapper[4841]: I1204 09:24:41.958577 4841 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" podStartSLOduration=1.9585578259999998 podStartE2EDuration="1.958557826s" podCreationTimestamp="2025-12-04 09:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:24:41.958120774 +0000 UTC m=+348.709910978" watchObservedRunningTime="2025-12-04 09:24:41.958557826 +0000 UTC m=+348.710348040" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.009099 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"] Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.015682 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4c4ddfdb-rs5s9"] Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.296066 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hb7jm"] Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.297660 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.299675 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.316659 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hb7jm"] Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.347343 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn"] Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.460274 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k4w6\" (UniqueName: \"kubernetes.io/projected/d2c87b0b-8ee5-4c87-90da-1dbba059e5aa-kube-api-access-5k4w6\") pod \"community-operators-hb7jm\" (UID: \"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa\") " pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.460352 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c87b0b-8ee5-4c87-90da-1dbba059e5aa-catalog-content\") pod \"community-operators-hb7jm\" (UID: \"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa\") " pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.460398 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c87b0b-8ee5-4c87-90da-1dbba059e5aa-utilities\") pod \"community-operators-hb7jm\" (UID: \"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa\") " pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.498647 4841 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-qn7mb"] Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.499567 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.501711 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.509588 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qn7mb"] Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.561626 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c87b0b-8ee5-4c87-90da-1dbba059e5aa-catalog-content\") pod \"community-operators-hb7jm\" (UID: \"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa\") " pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.561702 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c87b0b-8ee5-4c87-90da-1dbba059e5aa-utilities\") pod \"community-operators-hb7jm\" (UID: \"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa\") " pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.561791 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k4w6\" (UniqueName: \"kubernetes.io/projected/d2c87b0b-8ee5-4c87-90da-1dbba059e5aa-kube-api-access-5k4w6\") pod \"community-operators-hb7jm\" (UID: \"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa\") " pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.562164 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d2c87b0b-8ee5-4c87-90da-1dbba059e5aa-utilities\") pod \"community-operators-hb7jm\" (UID: \"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa\") " pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.562220 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c87b0b-8ee5-4c87-90da-1dbba059e5aa-catalog-content\") pod \"community-operators-hb7jm\" (UID: \"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa\") " pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.580088 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k4w6\" (UniqueName: \"kubernetes.io/projected/d2c87b0b-8ee5-4c87-90da-1dbba059e5aa-kube-api-access-5k4w6\") pod \"community-operators-hb7jm\" (UID: \"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa\") " pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.618272 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.662470 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-utilities\") pod \"redhat-marketplace-qn7mb\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.662544 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-catalog-content\") pod \"redhat-marketplace-qn7mb\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.662575 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6ft6\" (UniqueName: \"kubernetes.io/projected/a5f6574a-c54e-4652-adca-674d077a3282-kube-api-access-g6ft6\") pod \"redhat-marketplace-qn7mb\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.763491 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-utilities\") pod \"redhat-marketplace-qn7mb\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.763845 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-catalog-content\") pod \"redhat-marketplace-qn7mb\" 
(UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.763882 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6ft6\" (UniqueName: \"kubernetes.io/projected/a5f6574a-c54e-4652-adca-674d077a3282-kube-api-access-g6ft6\") pod \"redhat-marketplace-qn7mb\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.764160 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-utilities\") pod \"redhat-marketplace-qn7mb\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.764503 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-catalog-content\") pod \"redhat-marketplace-qn7mb\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.786963 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6ft6\" (UniqueName: \"kubernetes.io/projected/a5f6574a-c54e-4652-adca-674d077a3282-kube-api-access-g6ft6\") pod \"redhat-marketplace-qn7mb\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.815598 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.960890 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" event={"ID":"9f3eb5f8-a7cf-479c-9374-5148ca9c0654","Type":"ContainerStarted","Data":"b08dfa377506c7ec8e93fa543c758c89290d4300474c00f8c5433bbf272c424e"} Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.960937 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" event={"ID":"9f3eb5f8-a7cf-479c-9374-5148ca9c0654","Type":"ContainerStarted","Data":"1cc0cad43258eb84723aa0b11014c392c19c52d5e0e34debd18e4bbd1ae9b3df"} Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.961515 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:42 crc kubenswrapper[4841]: I1204 09:24:42.969555 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6kk8z" Dec 04 09:24:43 crc kubenswrapper[4841]: I1204 09:24:43.002031 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" podStartSLOduration=3.002009416 podStartE2EDuration="3.002009416s" podCreationTimestamp="2025-12-04 09:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:24:42.979982789 +0000 UTC m=+349.731773003" watchObservedRunningTime="2025-12-04 09:24:43.002009416 +0000 UTC m=+349.753799620" Dec 04 09:24:43 crc kubenswrapper[4841]: I1204 09:24:43.036041 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hb7jm"] Dec 04 09:24:43 crc 
kubenswrapper[4841]: W1204 09:24:43.040569 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2c87b0b_8ee5_4c87_90da_1dbba059e5aa.slice/crio-34d7a8502d490db29f4eb97ee74e3a55334ed3ee856f4905171effbb0ab60ce3 WatchSource:0}: Error finding container 34d7a8502d490db29f4eb97ee74e3a55334ed3ee856f4905171effbb0ab60ce3: Status 404 returned error can't find the container with id 34d7a8502d490db29f4eb97ee74e3a55334ed3ee856f4905171effbb0ab60ce3 Dec 04 09:24:43 crc kubenswrapper[4841]: I1204 09:24:43.054286 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-99655dc9d-gvqnn" Dec 04 09:24:43 crc kubenswrapper[4841]: I1204 09:24:43.201777 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qn7mb"] Dec 04 09:24:43 crc kubenswrapper[4841]: W1204 09:24:43.209104 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5f6574a_c54e_4652_adca_674d077a3282.slice/crio-8d3c6a778a1574e8c87a24ce798d2a3545875e2fec5dc95bfed21d018be5f825 WatchSource:0}: Error finding container 8d3c6a778a1574e8c87a24ce798d2a3545875e2fec5dc95bfed21d018be5f825: Status 404 returned error can't find the container with id 8d3c6a778a1574e8c87a24ce798d2a3545875e2fec5dc95bfed21d018be5f825 Dec 04 09:24:43 crc kubenswrapper[4841]: I1204 09:24:43.637197 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88461373-a228-429d-8b73-b96c7dbca2ef" path="/var/lib/kubelet/pods/88461373-a228-429d-8b73-b96c7dbca2ef/volumes" Dec 04 09:24:43 crc kubenswrapper[4841]: I1204 09:24:43.975062 4841 generic.go:334] "Generic (PLEG): container finished" podID="d2c87b0b-8ee5-4c87-90da-1dbba059e5aa" containerID="ccda2fa282d6b0657a15ac353d1f7994f88b461d3493ed3c0876c9d21c3acb7d" exitCode=0 Dec 04 09:24:43 crc kubenswrapper[4841]: 
I1204 09:24:43.975307 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb7jm" event={"ID":"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa","Type":"ContainerDied","Data":"ccda2fa282d6b0657a15ac353d1f7994f88b461d3493ed3c0876c9d21c3acb7d"} Dec 04 09:24:43 crc kubenswrapper[4841]: I1204 09:24:43.975376 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb7jm" event={"ID":"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa","Type":"ContainerStarted","Data":"34d7a8502d490db29f4eb97ee74e3a55334ed3ee856f4905171effbb0ab60ce3"} Dec 04 09:24:43 crc kubenswrapper[4841]: I1204 09:24:43.979114 4841 generic.go:334] "Generic (PLEG): container finished" podID="a5f6574a-c54e-4652-adca-674d077a3282" containerID="705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84" exitCode=0 Dec 04 09:24:43 crc kubenswrapper[4841]: I1204 09:24:43.979234 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qn7mb" event={"ID":"a5f6574a-c54e-4652-adca-674d077a3282","Type":"ContainerDied","Data":"705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84"} Dec 04 09:24:43 crc kubenswrapper[4841]: I1204 09:24:43.979712 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qn7mb" event={"ID":"a5f6574a-c54e-4652-adca-674d077a3282","Type":"ContainerStarted","Data":"8d3c6a778a1574e8c87a24ce798d2a3545875e2fec5dc95bfed21d018be5f825"} Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.703786 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dvmfx"] Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.710894 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.712738 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dvmfx"] Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.714004 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.790845 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e9cd14-c26a-46e1-a360-a581ae897e94-utilities\") pod \"certified-operators-dvmfx\" (UID: \"29e9cd14-c26a-46e1-a360-a581ae897e94\") " pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.791186 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e9cd14-c26a-46e1-a360-a581ae897e94-catalog-content\") pod \"certified-operators-dvmfx\" (UID: \"29e9cd14-c26a-46e1-a360-a581ae897e94\") " pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.791217 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlk7k\" (UniqueName: \"kubernetes.io/projected/29e9cd14-c26a-46e1-a360-a581ae897e94-kube-api-access-xlk7k\") pod \"certified-operators-dvmfx\" (UID: \"29e9cd14-c26a-46e1-a360-a581ae897e94\") " pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.892795 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e9cd14-c26a-46e1-a360-a581ae897e94-catalog-content\") pod \"certified-operators-dvmfx\" (UID: 
\"29e9cd14-c26a-46e1-a360-a581ae897e94\") " pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.892840 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlk7k\" (UniqueName: \"kubernetes.io/projected/29e9cd14-c26a-46e1-a360-a581ae897e94-kube-api-access-xlk7k\") pod \"certified-operators-dvmfx\" (UID: \"29e9cd14-c26a-46e1-a360-a581ae897e94\") " pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.892885 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e9cd14-c26a-46e1-a360-a581ae897e94-utilities\") pod \"certified-operators-dvmfx\" (UID: \"29e9cd14-c26a-46e1-a360-a581ae897e94\") " pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.893604 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e9cd14-c26a-46e1-a360-a581ae897e94-utilities\") pod \"certified-operators-dvmfx\" (UID: \"29e9cd14-c26a-46e1-a360-a581ae897e94\") " pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.893619 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e9cd14-c26a-46e1-a360-a581ae897e94-catalog-content\") pod \"certified-operators-dvmfx\" (UID: \"29e9cd14-c26a-46e1-a360-a581ae897e94\") " pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.903972 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-btlzl"] Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.904908 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.908844 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.918725 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlk7k\" (UniqueName: \"kubernetes.io/projected/29e9cd14-c26a-46e1-a360-a581ae897e94-kube-api-access-xlk7k\") pod \"certified-operators-dvmfx\" (UID: \"29e9cd14-c26a-46e1-a360-a581ae897e94\") " pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.920062 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btlzl"] Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.987550 4841 generic.go:334] "Generic (PLEG): container finished" podID="d2c87b0b-8ee5-4c87-90da-1dbba059e5aa" containerID="2b9fecfef7497269df47fb8ece80fadcda331b5807db16683ea599c171fd45fc" exitCode=0 Dec 04 09:24:44 crc kubenswrapper[4841]: I1204 09:24:44.988366 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb7jm" event={"ID":"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa","Type":"ContainerDied","Data":"2b9fecfef7497269df47fb8ece80fadcda331b5807db16683ea599c171fd45fc"} Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:44.999980 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8e3c550-3036-46e6-9851-3f09bdcfa65f-utilities\") pod \"redhat-operators-btlzl\" (UID: \"f8e3c550-3036-46e6-9851-3f09bdcfa65f\") " pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.000104 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/f8e3c550-3036-46e6-9851-3f09bdcfa65f-catalog-content\") pod \"redhat-operators-btlzl\" (UID: \"f8e3c550-3036-46e6-9851-3f09bdcfa65f\") " pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.000143 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc4b5\" (UniqueName: \"kubernetes.io/projected/f8e3c550-3036-46e6-9851-3f09bdcfa65f-kube-api-access-nc4b5\") pod \"redhat-operators-btlzl\" (UID: \"f8e3c550-3036-46e6-9851-3f09bdcfa65f\") " pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.035847 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.101251 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8e3c550-3036-46e6-9851-3f09bdcfa65f-utilities\") pod \"redhat-operators-btlzl\" (UID: \"f8e3c550-3036-46e6-9851-3f09bdcfa65f\") " pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.101340 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8e3c550-3036-46e6-9851-3f09bdcfa65f-catalog-content\") pod \"redhat-operators-btlzl\" (UID: \"f8e3c550-3036-46e6-9851-3f09bdcfa65f\") " pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.101369 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc4b5\" (UniqueName: \"kubernetes.io/projected/f8e3c550-3036-46e6-9851-3f09bdcfa65f-kube-api-access-nc4b5\") pod \"redhat-operators-btlzl\" (UID: \"f8e3c550-3036-46e6-9851-3f09bdcfa65f\") " 
pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.101691 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8e3c550-3036-46e6-9851-3f09bdcfa65f-utilities\") pod \"redhat-operators-btlzl\" (UID: \"f8e3c550-3036-46e6-9851-3f09bdcfa65f\") " pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.102260 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8e3c550-3036-46e6-9851-3f09bdcfa65f-catalog-content\") pod \"redhat-operators-btlzl\" (UID: \"f8e3c550-3036-46e6-9851-3f09bdcfa65f\") " pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.125506 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc4b5\" (UniqueName: \"kubernetes.io/projected/f8e3c550-3036-46e6-9851-3f09bdcfa65f-kube-api-access-nc4b5\") pod \"redhat-operators-btlzl\" (UID: \"f8e3c550-3036-46e6-9851-3f09bdcfa65f\") " pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.280440 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.434134 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dvmfx"] Dec 04 09:24:45 crc kubenswrapper[4841]: W1204 09:24:45.444697 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29e9cd14_c26a_46e1_a360_a581ae897e94.slice/crio-03333c9cae232875b6cfc8e38658d8e9a24c96fbeb00a81b5297f9906310aa65 WatchSource:0}: Error finding container 03333c9cae232875b6cfc8e38658d8e9a24c96fbeb00a81b5297f9906310aa65: Status 404 returned error can't find the container with id 03333c9cae232875b6cfc8e38658d8e9a24c96fbeb00a81b5297f9906310aa65 Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.700668 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btlzl"] Dec 04 09:24:45 crc kubenswrapper[4841]: W1204 09:24:45.726587 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8e3c550_3036_46e6_9851_3f09bdcfa65f.slice/crio-1d11f6e7215591fb48997afec066d8070fae894ebacd666be188a528083ce5a7 WatchSource:0}: Error finding container 1d11f6e7215591fb48997afec066d8070fae894ebacd666be188a528083ce5a7: Status 404 returned error can't find the container with id 1d11f6e7215591fb48997afec066d8070fae894ebacd666be188a528083ce5a7 Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.993288 4841 generic.go:334] "Generic (PLEG): container finished" podID="f8e3c550-3036-46e6-9851-3f09bdcfa65f" containerID="9653fbda1e1a2bec89b368abc5cf54566043048551fc3bb2e0a6eec9acf3253b" exitCode=0 Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.993358 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlzl" 
event={"ID":"f8e3c550-3036-46e6-9851-3f09bdcfa65f","Type":"ContainerDied","Data":"9653fbda1e1a2bec89b368abc5cf54566043048551fc3bb2e0a6eec9acf3253b"} Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.993387 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlzl" event={"ID":"f8e3c550-3036-46e6-9851-3f09bdcfa65f","Type":"ContainerStarted","Data":"1d11f6e7215591fb48997afec066d8070fae894ebacd666be188a528083ce5a7"} Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.995116 4841 generic.go:334] "Generic (PLEG): container finished" podID="29e9cd14-c26a-46e1-a360-a581ae897e94" containerID="39dbcb5542d10a1a47ceff750745f67c574f72a58629f1c1233941b00fff8949" exitCode=0 Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.995166 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvmfx" event={"ID":"29e9cd14-c26a-46e1-a360-a581ae897e94","Type":"ContainerDied","Data":"39dbcb5542d10a1a47ceff750745f67c574f72a58629f1c1233941b00fff8949"} Dec 04 09:24:45 crc kubenswrapper[4841]: I1204 09:24:45.995182 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvmfx" event={"ID":"29e9cd14-c26a-46e1-a360-a581ae897e94","Type":"ContainerStarted","Data":"03333c9cae232875b6cfc8e38658d8e9a24c96fbeb00a81b5297f9906310aa65"} Dec 04 09:24:46 crc kubenswrapper[4841]: I1204 09:24:46.001266 4841 generic.go:334] "Generic (PLEG): container finished" podID="a5f6574a-c54e-4652-adca-674d077a3282" containerID="766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f" exitCode=0 Dec 04 09:24:46 crc kubenswrapper[4841]: I1204 09:24:46.001322 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qn7mb" event={"ID":"a5f6574a-c54e-4652-adca-674d077a3282","Type":"ContainerDied","Data":"766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f"} Dec 04 09:24:47 crc kubenswrapper[4841]: I1204 
09:24:47.014252 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvmfx" event={"ID":"29e9cd14-c26a-46e1-a360-a581ae897e94","Type":"ContainerStarted","Data":"a68f1ee8a09788f48c1c652307381dd8eb1591e4f04067c98479b5fc65689ef5"} Dec 04 09:24:47 crc kubenswrapper[4841]: I1204 09:24:47.027181 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlzl" event={"ID":"f8e3c550-3036-46e6-9851-3f09bdcfa65f","Type":"ContainerStarted","Data":"34793236b325a12f66c9f43798cf94405fb1a53e7f11a73a31606eff7b5cc40f"} Dec 04 09:24:47 crc kubenswrapper[4841]: I1204 09:24:47.045302 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb7jm" event={"ID":"d2c87b0b-8ee5-4c87-90da-1dbba059e5aa","Type":"ContainerStarted","Data":"8bfd8ee674cd216b7fa78e754826504a3e09ff7a6e84dc63cefb0887ca6da7f1"} Dec 04 09:24:47 crc kubenswrapper[4841]: E1204 09:24:47.046585 4841 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29e9cd14_c26a_46e1_a360_a581ae897e94.slice/crio-a68f1ee8a09788f48c1c652307381dd8eb1591e4f04067c98479b5fc65689ef5.scope\": RecentStats: unable to find data in memory cache]" Dec 04 09:24:47 crc kubenswrapper[4841]: I1204 09:24:47.052210 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qn7mb" event={"ID":"a5f6574a-c54e-4652-adca-674d077a3282","Type":"ContainerStarted","Data":"d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1"} Dec 04 09:24:47 crc kubenswrapper[4841]: I1204 09:24:47.076115 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hb7jm" podStartSLOduration=2.674601218 podStartE2EDuration="5.076098969s" podCreationTimestamp="2025-12-04 09:24:42 +0000 UTC" firstStartedPulling="2025-12-04 
09:24:43.977841414 +0000 UTC m=+350.729631618" lastFinishedPulling="2025-12-04 09:24:46.379339165 +0000 UTC m=+353.131129369" observedRunningTime="2025-12-04 09:24:47.073043237 +0000 UTC m=+353.824833431" watchObservedRunningTime="2025-12-04 09:24:47.076098969 +0000 UTC m=+353.827889173" Dec 04 09:24:47 crc kubenswrapper[4841]: I1204 09:24:47.113315 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qn7mb" podStartSLOduration=2.679642453 podStartE2EDuration="5.113298472s" podCreationTimestamp="2025-12-04 09:24:42 +0000 UTC" firstStartedPulling="2025-12-04 09:24:43.983182976 +0000 UTC m=+350.734973220" lastFinishedPulling="2025-12-04 09:24:46.416839035 +0000 UTC m=+353.168629239" observedRunningTime="2025-12-04 09:24:47.110572409 +0000 UTC m=+353.862362633" watchObservedRunningTime="2025-12-04 09:24:47.113298472 +0000 UTC m=+353.865088676" Dec 04 09:24:48 crc kubenswrapper[4841]: I1204 09:24:48.060863 4841 generic.go:334] "Generic (PLEG): container finished" podID="f8e3c550-3036-46e6-9851-3f09bdcfa65f" containerID="34793236b325a12f66c9f43798cf94405fb1a53e7f11a73a31606eff7b5cc40f" exitCode=0 Dec 04 09:24:48 crc kubenswrapper[4841]: I1204 09:24:48.061199 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlzl" event={"ID":"f8e3c550-3036-46e6-9851-3f09bdcfa65f","Type":"ContainerDied","Data":"34793236b325a12f66c9f43798cf94405fb1a53e7f11a73a31606eff7b5cc40f"} Dec 04 09:24:48 crc kubenswrapper[4841]: I1204 09:24:48.063970 4841 generic.go:334] "Generic (PLEG): container finished" podID="29e9cd14-c26a-46e1-a360-a581ae897e94" containerID="a68f1ee8a09788f48c1c652307381dd8eb1591e4f04067c98479b5fc65689ef5" exitCode=0 Dec 04 09:24:48 crc kubenswrapper[4841]: I1204 09:24:48.064965 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvmfx" 
event={"ID":"29e9cd14-c26a-46e1-a360-a581ae897e94","Type":"ContainerDied","Data":"a68f1ee8a09788f48c1c652307381dd8eb1591e4f04067c98479b5fc65689ef5"} Dec 04 09:24:48 crc kubenswrapper[4841]: I1204 09:24:48.065000 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvmfx" event={"ID":"29e9cd14-c26a-46e1-a360-a581ae897e94","Type":"ContainerStarted","Data":"5d87cf91664ea20434dffaf9fc8be2594816cd62d7f25c2c7376ceb0a12e3c37"} Dec 04 09:24:48 crc kubenswrapper[4841]: I1204 09:24:48.101731 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dvmfx" podStartSLOduration=2.575160769 podStartE2EDuration="4.101713674s" podCreationTimestamp="2025-12-04 09:24:44 +0000 UTC" firstStartedPulling="2025-12-04 09:24:45.996584767 +0000 UTC m=+352.748374971" lastFinishedPulling="2025-12-04 09:24:47.523137672 +0000 UTC m=+354.274927876" observedRunningTime="2025-12-04 09:24:48.099331471 +0000 UTC m=+354.851121675" watchObservedRunningTime="2025-12-04 09:24:48.101713674 +0000 UTC m=+354.853503878" Dec 04 09:24:50 crc kubenswrapper[4841]: I1204 09:24:50.076324 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlzl" event={"ID":"f8e3c550-3036-46e6-9851-3f09bdcfa65f","Type":"ContainerStarted","Data":"564d248893e62845178f649012dae1d9726140f4a02314b3dff7d6be996be6a9"} Dec 04 09:24:50 crc kubenswrapper[4841]: I1204 09:24:50.095529 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-btlzl" podStartSLOduration=3.640332249 podStartE2EDuration="6.095508741s" podCreationTimestamp="2025-12-04 09:24:44 +0000 UTC" firstStartedPulling="2025-12-04 09:24:45.994711528 +0000 UTC m=+352.746501732" lastFinishedPulling="2025-12-04 09:24:48.44988802 +0000 UTC m=+355.201678224" observedRunningTime="2025-12-04 09:24:50.091753052 +0000 UTC m=+356.843543266" 
watchObservedRunningTime="2025-12-04 09:24:50.095508741 +0000 UTC m=+356.847298945" Dec 04 09:24:50 crc kubenswrapper[4841]: I1204 09:24:50.497719 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:24:50 crc kubenswrapper[4841]: I1204 09:24:50.497810 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:24:52 crc kubenswrapper[4841]: I1204 09:24:52.619074 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:52 crc kubenswrapper[4841]: I1204 09:24:52.619150 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:52 crc kubenswrapper[4841]: I1204 09:24:52.663193 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:52 crc kubenswrapper[4841]: I1204 09:24:52.816688 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:52 crc kubenswrapper[4841]: I1204 09:24:52.816777 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:52 crc kubenswrapper[4841]: I1204 09:24:52.860634 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:53 crc 
kubenswrapper[4841]: I1204 09:24:53.132261 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hb7jm" Dec 04 09:24:53 crc kubenswrapper[4841]: I1204 09:24:53.151432 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:24:55 crc kubenswrapper[4841]: I1204 09:24:55.036164 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:55 crc kubenswrapper[4841]: I1204 09:24:55.037674 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:55 crc kubenswrapper[4841]: I1204 09:24:55.077594 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:55 crc kubenswrapper[4841]: I1204 09:24:55.138210 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dvmfx" Dec 04 09:24:55 crc kubenswrapper[4841]: I1204 09:24:55.281562 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:55 crc kubenswrapper[4841]: I1204 09:24:55.282043 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:55 crc kubenswrapper[4841]: I1204 09:24:55.326005 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:24:56 crc kubenswrapper[4841]: I1204 09:24:56.142678 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-btlzl" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.399444 4841 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-jk8tm"] Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.400910 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.412879 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jk8tm"] Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.532715 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45c404a9-97f6-4ef8-9b7d-b5af00f93495-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.532790 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.532957 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h5dm\" (UniqueName: \"kubernetes.io/projected/45c404a9-97f6-4ef8-9b7d-b5af00f93495-kube-api-access-4h5dm\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.533146 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/45c404a9-97f6-4ef8-9b7d-b5af00f93495-registry-tls\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.533229 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45c404a9-97f6-4ef8-9b7d-b5af00f93495-bound-sa-token\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.533295 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45c404a9-97f6-4ef8-9b7d-b5af00f93495-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.533390 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45c404a9-97f6-4ef8-9b7d-b5af00f93495-registry-certificates\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.533420 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45c404a9-97f6-4ef8-9b7d-b5af00f93495-trusted-ca\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc 
kubenswrapper[4841]: I1204 09:25:01.556067 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.634646 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45c404a9-97f6-4ef8-9b7d-b5af00f93495-registry-tls\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.634695 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45c404a9-97f6-4ef8-9b7d-b5af00f93495-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.634721 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45c404a9-97f6-4ef8-9b7d-b5af00f93495-bound-sa-token\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.634747 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45c404a9-97f6-4ef8-9b7d-b5af00f93495-registry-certificates\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: 
\"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.634788 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45c404a9-97f6-4ef8-9b7d-b5af00f93495-trusted-ca\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.634848 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45c404a9-97f6-4ef8-9b7d-b5af00f93495-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.634875 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h5dm\" (UniqueName: \"kubernetes.io/projected/45c404a9-97f6-4ef8-9b7d-b5af00f93495-kube-api-access-4h5dm\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.635857 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45c404a9-97f6-4ef8-9b7d-b5af00f93495-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.636576 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/45c404a9-97f6-4ef8-9b7d-b5af00f93495-trusted-ca\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.636709 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45c404a9-97f6-4ef8-9b7d-b5af00f93495-registry-certificates\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.640424 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45c404a9-97f6-4ef8-9b7d-b5af00f93495-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.650480 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45c404a9-97f6-4ef8-9b7d-b5af00f93495-registry-tls\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.656590 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45c404a9-97f6-4ef8-9b7d-b5af00f93495-bound-sa-token\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.661452 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-4h5dm\" (UniqueName: \"kubernetes.io/projected/45c404a9-97f6-4ef8-9b7d-b5af00f93495-kube-api-access-4h5dm\") pod \"image-registry-66df7c8f76-jk8tm\" (UID: \"45c404a9-97f6-4ef8-9b7d-b5af00f93495\") " pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:01 crc kubenswrapper[4841]: I1204 09:25:01.718861 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:02 crc kubenswrapper[4841]: I1204 09:25:02.133467 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jk8tm"] Dec 04 09:25:02 crc kubenswrapper[4841]: W1204 09:25:02.136319 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45c404a9_97f6_4ef8_9b7d_b5af00f93495.slice/crio-2cb7539df31b2d86df29460490e15b4f4d82b679485258738c487ef93cf014dc WatchSource:0}: Error finding container 2cb7539df31b2d86df29460490e15b4f4d82b679485258738c487ef93cf014dc: Status 404 returned error can't find the container with id 2cb7539df31b2d86df29460490e15b4f4d82b679485258738c487ef93cf014dc Dec 04 09:25:03 crc kubenswrapper[4841]: I1204 09:25:03.141193 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" event={"ID":"45c404a9-97f6-4ef8-9b7d-b5af00f93495","Type":"ContainerStarted","Data":"aa977ec232b1cf1332e32b0b10c011fcc0547dd42bb1b467a88822efb433de71"} Dec 04 09:25:03 crc kubenswrapper[4841]: I1204 09:25:03.141597 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" event={"ID":"45c404a9-97f6-4ef8-9b7d-b5af00f93495","Type":"ContainerStarted","Data":"2cb7539df31b2d86df29460490e15b4f4d82b679485258738c487ef93cf014dc"} Dec 04 09:25:03 crc kubenswrapper[4841]: I1204 09:25:03.141663 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:03 crc kubenswrapper[4841]: I1204 09:25:03.167298 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" podStartSLOduration=2.167274699 podStartE2EDuration="2.167274699s" podCreationTimestamp="2025-12-04 09:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:25:03.165913793 +0000 UTC m=+369.917704037" watchObservedRunningTime="2025-12-04 09:25:03.167274699 +0000 UTC m=+369.919064923" Dec 04 09:25:20 crc kubenswrapper[4841]: I1204 09:25:20.497874 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:25:20 crc kubenswrapper[4841]: I1204 09:25:20.498486 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:25:21 crc kubenswrapper[4841]: I1204 09:25:21.728017 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jk8tm" Dec 04 09:25:21 crc kubenswrapper[4841]: I1204 09:25:21.810232 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6s7w"] Dec 04 09:25:46 crc kubenswrapper[4841]: I1204 09:25:46.863530 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" 
podUID="c9eeef2c-8b36-4fea-86d7-5732fad3d501" containerName="registry" containerID="cri-o://03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5" gracePeriod=30 Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.339643 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.433965 4841 generic.go:334] "Generic (PLEG): container finished" podID="c9eeef2c-8b36-4fea-86d7-5732fad3d501" containerID="03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5" exitCode=0 Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.434078 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" event={"ID":"c9eeef2c-8b36-4fea-86d7-5732fad3d501","Type":"ContainerDied","Data":"03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5"} Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.434275 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" event={"ID":"c9eeef2c-8b36-4fea-86d7-5732fad3d501","Type":"ContainerDied","Data":"6a12f59d1bf0e4cd171fe039ce6db454193972f6af1f40972be9fdfe40af428a"} Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.434111 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x6s7w" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.434300 4841 scope.go:117] "RemoveContainer" containerID="03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.460445 4841 scope.go:117] "RemoveContainer" containerID="03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5" Dec 04 09:25:48 crc kubenswrapper[4841]: E1204 09:25:48.461044 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5\": container with ID starting with 03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5 not found: ID does not exist" containerID="03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.461089 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5"} err="failed to get container status \"03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5\": rpc error: code = NotFound desc = could not find container \"03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5\": container with ID starting with 03bc0238aa70955e895ead0540526fd64cbe0854fc6d484bb003ceef20a95fb5 not found: ID does not exist" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.525060 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-bound-sa-token\") pod \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.525139 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-certificates\") pod \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.525191 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c9eeef2c-8b36-4fea-86d7-5732fad3d501-installation-pull-secrets\") pod \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.525247 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c9eeef2c-8b36-4fea-86d7-5732fad3d501-ca-trust-extracted\") pod \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.525304 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-trusted-ca\") pod \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.525469 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.525562 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-tls\") pod \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\" (UID: 
\"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.525616 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lck4j\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-kube-api-access-lck4j\") pod \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\" (UID: \"c9eeef2c-8b36-4fea-86d7-5732fad3d501\") " Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.526812 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c9eeef2c-8b36-4fea-86d7-5732fad3d501" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.527073 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c9eeef2c-8b36-4fea-86d7-5732fad3d501" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.539424 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-kube-api-access-lck4j" (OuterVolumeSpecName: "kube-api-access-lck4j") pod "c9eeef2c-8b36-4fea-86d7-5732fad3d501" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501"). InnerVolumeSpecName "kube-api-access-lck4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.539870 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c9eeef2c-8b36-4fea-86d7-5732fad3d501" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.540528 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c9eeef2c-8b36-4fea-86d7-5732fad3d501" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.545919 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9eeef2c-8b36-4fea-86d7-5732fad3d501-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c9eeef2c-8b36-4fea-86d7-5732fad3d501" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.548474 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c9eeef2c-8b36-4fea-86d7-5732fad3d501" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.566531 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9eeef2c-8b36-4fea-86d7-5732fad3d501-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c9eeef2c-8b36-4fea-86d7-5732fad3d501" (UID: "c9eeef2c-8b36-4fea-86d7-5732fad3d501"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.627519 4841 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.627844 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lck4j\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-kube-api-access-lck4j\") on node \"crc\" DevicePath \"\"" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.627993 4841 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9eeef2c-8b36-4fea-86d7-5732fad3d501-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.628029 4841 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.628050 4841 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c9eeef2c-8b36-4fea-86d7-5732fad3d501-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.628068 4841 reconciler_common.go:293] "Volume detached 
for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c9eeef2c-8b36-4fea-86d7-5732fad3d501-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.628085 4841 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9eeef2c-8b36-4fea-86d7-5732fad3d501-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.791038 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6s7w"] Dec 04 09:25:48 crc kubenswrapper[4841]: I1204 09:25:48.798302 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x6s7w"] Dec 04 09:25:49 crc kubenswrapper[4841]: I1204 09:25:49.630108 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9eeef2c-8b36-4fea-86d7-5732fad3d501" path="/var/lib/kubelet/pods/c9eeef2c-8b36-4fea-86d7-5732fad3d501/volumes" Dec 04 09:25:50 crc kubenswrapper[4841]: I1204 09:25:50.497840 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:25:50 crc kubenswrapper[4841]: I1204 09:25:50.497930 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:25:50 crc kubenswrapper[4841]: I1204 09:25:50.497996 4841 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 
04 09:25:50 crc kubenswrapper[4841]: I1204 09:25:50.498898 4841 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b94bd0cc51675d9a003d35873a29c70ee931aa33d6acdd0ac23ceb1766effc2"} pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 09:25:50 crc kubenswrapper[4841]: I1204 09:25:50.499000 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" containerID="cri-o://5b94bd0cc51675d9a003d35873a29c70ee931aa33d6acdd0ac23ceb1766effc2" gracePeriod=600 Dec 04 09:25:51 crc kubenswrapper[4841]: I1204 09:25:51.457691 4841 generic.go:334] "Generic (PLEG): container finished" podID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerID="5b94bd0cc51675d9a003d35873a29c70ee931aa33d6acdd0ac23ceb1766effc2" exitCode=0 Dec 04 09:25:51 crc kubenswrapper[4841]: I1204 09:25:51.457755 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerDied","Data":"5b94bd0cc51675d9a003d35873a29c70ee931aa33d6acdd0ac23ceb1766effc2"} Dec 04 09:25:51 crc kubenswrapper[4841]: I1204 09:25:51.458181 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerStarted","Data":"c46e17cfed7aa9799cd3b8c0ac668e71194f263a863afcf61170a430ac52530e"} Dec 04 09:25:51 crc kubenswrapper[4841]: I1204 09:25:51.458216 4841 scope.go:117] "RemoveContainer" containerID="e8d5c8705007219c8f44ccedc6d132b494ce95a74eb64429305a825a470d4e0e" Dec 04 09:27:50 crc kubenswrapper[4841]: I1204 09:27:50.498076 4841 
patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:27:50 crc kubenswrapper[4841]: I1204 09:27:50.498821 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:28:20 crc kubenswrapper[4841]: I1204 09:28:20.498390 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:28:20 crc kubenswrapper[4841]: I1204 09:28:20.499922 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:28:50 crc kubenswrapper[4841]: I1204 09:28:50.497441 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:28:50 crc kubenswrapper[4841]: I1204 09:28:50.497944 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" 
podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:28:50 crc kubenswrapper[4841]: I1204 09:28:50.497990 4841 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:28:50 crc kubenswrapper[4841]: I1204 09:28:50.498540 4841 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c46e17cfed7aa9799cd3b8c0ac668e71194f263a863afcf61170a430ac52530e"} pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 09:28:50 crc kubenswrapper[4841]: I1204 09:28:50.498594 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" containerID="cri-o://c46e17cfed7aa9799cd3b8c0ac668e71194f263a863afcf61170a430ac52530e" gracePeriod=600 Dec 04 09:28:51 crc kubenswrapper[4841]: I1204 09:28:51.639172 4841 generic.go:334] "Generic (PLEG): container finished" podID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerID="c46e17cfed7aa9799cd3b8c0ac668e71194f263a863afcf61170a430ac52530e" exitCode=0 Dec 04 09:28:51 crc kubenswrapper[4841]: I1204 09:28:51.639204 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerDied","Data":"c46e17cfed7aa9799cd3b8c0ac668e71194f263a863afcf61170a430ac52530e"} Dec 04 09:28:51 crc kubenswrapper[4841]: I1204 09:28:51.639573 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerStarted","Data":"dbab89145cc3e3f957a444edc7e520ea73581e21fddeb3e6fa00bb9bfcf2af76"} Dec 04 09:28:51 crc kubenswrapper[4841]: I1204 09:28:51.639592 4841 scope.go:117] "RemoveContainer" containerID="5b94bd0cc51675d9a003d35873a29c70ee931aa33d6acdd0ac23ceb1766effc2" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.092207 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hhkwl"] Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.093393 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovn-controller" containerID="cri-o://2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387" gracePeriod=30 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.093526 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="sbdb" containerID="cri-o://c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c" gracePeriod=30 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.093600 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="nbdb" containerID="cri-o://614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28" gracePeriod=30 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.093657 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="northd" containerID="cri-o://98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890" 
gracePeriod=30 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.093692 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54" gracePeriod=30 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.093726 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="kube-rbac-proxy-node" containerID="cri-o://3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454" gracePeriod=30 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.093783 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovn-acl-logging" containerID="cri-o://1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899" gracePeriod=30 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.185353 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller" containerID="cri-o://700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed" gracePeriod=30 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.445173 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/3.log" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.447920 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovn-acl-logging/0.log" Dec 04 09:29:42 crc 
kubenswrapper[4841]: I1204 09:29:42.448419 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovn-controller/0.log" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.448979 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504297 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4vldw"] Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504559 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504583 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller" Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504598 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="sbdb" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504609 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="sbdb" Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504622 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504632 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller" Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504641 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="kubecfg-setup" Dec 04 09:29:42 crc kubenswrapper[4841]: 
I1204 09:29:42.504649 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="kubecfg-setup"
Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504660 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="kube-rbac-proxy-ovn-metrics"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504668 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="kube-rbac-proxy-ovn-metrics"
Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504679 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504687 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504697 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovn-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504705 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovn-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504720 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="northd"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504728 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="northd"
Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504738 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="nbdb"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504746 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="nbdb"
Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504758 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504789 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504800 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="kube-rbac-proxy-node"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504808 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="kube-rbac-proxy-node"
Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504819 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovn-acl-logging"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504827 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovn-acl-logging"
Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.504837 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9eeef2c-8b36-4fea-86d7-5732fad3d501" containerName="registry"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504845 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9eeef2c-8b36-4fea-86d7-5732fad3d501" containerName="registry"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504974 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504987 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="northd"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.504998 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovn-acl-logging"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505010 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="kube-rbac-proxy-ovn-metrics"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505023 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505033 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9eeef2c-8b36-4fea-86d7-5732fad3d501" containerName="registry"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505044 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="kube-rbac-proxy-node"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505054 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovn-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505066 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505073 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="sbdb"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505083 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505092 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="nbdb"
Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.505209 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505218 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.505341 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerName="ovnkube-controller"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.507368 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614075 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-systemd\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614125 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-ovn-kubernetes\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614158 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-openvswitch\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614202 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-script-lib\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614222 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-log-socket\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614251 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-env-overrides\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614272 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-ovn\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614295 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-netd\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614338 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-systemd-units\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614361 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-node-log\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614382 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-slash\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614407 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-config\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614429 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-bin\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614462 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614482 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-netns\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614505 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-var-lib-openvswitch\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614541 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78nlx\" (UniqueName: \"kubernetes.io/projected/c56a9daa-a941-4d89-abd0-b7f0472ee869-kube-api-access-78nlx\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614578 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-kubelet\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614602 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovn-node-metrics-cert\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614642 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-etc-openvswitch\") pod \"c56a9daa-a941-4d89-abd0-b7f0472ee869\" (UID: \"c56a9daa-a941-4d89-abd0-b7f0472ee869\") "
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614797 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-cni-bin\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614828 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-etc-openvswitch\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614860 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-env-overrides\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614882 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-node-log\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614906 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-run-ovn-kubernetes\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614933 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-run-openvswitch\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614955 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-run-netns\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.614981 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-ovnkube-config\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615006 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-kubelet\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615030 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-run-ovn\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615049 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-cni-netd\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615070 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-systemd-units\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615102 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615124 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-slash\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615153 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-run-systemd\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615175 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzn7j\" (UniqueName: \"kubernetes.io/projected/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-kube-api-access-jzn7j\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615206 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-ovn-node-metrics-cert\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615230 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-var-lib-openvswitch\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615260 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-log-socket\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615288 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-ovnkube-script-lib\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.615455 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616290 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-node-log" (OuterVolumeSpecName: "node-log") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616354 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616381 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616426 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616458 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616474 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616494 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616554 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-slash" (OuterVolumeSpecName: "host-slash") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616586 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616625 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616654 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616659 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.616742 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-log-socket" (OuterVolumeSpecName: "log-socket") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.617318 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.617339 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.617440 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.622592 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56a9daa-a941-4d89-abd0-b7f0472ee869-kube-api-access-78nlx" (OuterVolumeSpecName: "kube-api-access-78nlx") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "kube-api-access-78nlx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.622901 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.637270 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c56a9daa-a941-4d89-abd0-b7f0472ee869" (UID: "c56a9daa-a941-4d89-abd0-b7f0472ee869"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716376 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716451 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716520 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-slash\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716584 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-run-systemd\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716613 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzn7j\" (UniqueName: \"kubernetes.io/projected/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-kube-api-access-jzn7j\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716660 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-ovn-node-metrics-cert\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716687 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-var-lib-openvswitch\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716737 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-log-socket\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716832 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-ovnkube-script-lib\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716847 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-run-systemd\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716884 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-cni-bin\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.718866 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-etc-openvswitch\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.718908 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-env-overrides\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.718936 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-node-log\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.718963 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-run-ovn-kubernetes\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.718993 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-run-netns\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719009 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-run-openvswitch\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719039 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-ovnkube-config\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719068 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-kubelet\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719094 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-run-ovn\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719110 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-cni-netd\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719129 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-systemd-units\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw"
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719237 4841 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719259 4841 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-systemd\") on node \"crc\" DevicePath \"\""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719269 4841 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719283 4841 reconciler_common.go:293] "Volume detached for
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719292 4841 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719305 4841 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-log-socket\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719314 4841 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719324 4841 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719333 4841 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719345 4841 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719354 4841 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-node-log\") on node \"crc\" DevicePath \"\"" 
Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719363 4841 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-slash\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719371 4841 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719383 4841 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719393 4841 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719407 4841 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719429 4841 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719445 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78nlx\" (UniqueName: \"kubernetes.io/projected/c56a9daa-a941-4d89-abd0-b7f0472ee869-kube-api-access-78nlx\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 
09:29:42.719456 4841 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c56a9daa-a941-4d89-abd0-b7f0472ee869-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719465 4841 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c56a9daa-a941-4d89-abd0-b7f0472ee869-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719507 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-systemd-units\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719540 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-slash\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.716927 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-cni-bin\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.719586 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-etc-openvswitch\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 
09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.718502 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-var-lib-openvswitch\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.720022 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-env-overrides\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.720019 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-run-openvswitch\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.718570 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-log-socket\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.720159 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-node-log\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.720223 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-run-ovn-kubernetes\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.720280 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-run-netns\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.720337 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-run-ovn\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.720411 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-kubelet\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.720409 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-ovnkube-script-lib\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.720512 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-ovnkube-config\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.720801 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-host-cni-netd\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.722683 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-ovn-node-metrics-cert\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.739627 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzn7j\" (UniqueName: \"kubernetes.io/projected/a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a-kube-api-access-jzn7j\") pod \"ovnkube-node-4vldw\" (UID: \"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.824692 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:42 crc kubenswrapper[4841]: W1204 09:29:42.857001 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda07a9d53_5b2e_4d9a_9a72_344a77f2ae0a.slice/crio-7756087b42e6a3ec81d3b12b55362661d7f10ebbe4058e3d39121363082ce858 WatchSource:0}: Error finding container 7756087b42e6a3ec81d3b12b55362661d7f10ebbe4058e3d39121363082ce858: Status 404 returned error can't find the container with id 7756087b42e6a3ec81d3b12b55362661d7f10ebbe4058e3d39121363082ce858 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.973361 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovnkube-controller/3.log" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.975176 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovn-acl-logging/0.log" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.975507 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hhkwl_c56a9daa-a941-4d89-abd0-b7f0472ee869/ovn-controller/0.log" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.975982 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed" exitCode=0 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976003 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c" exitCode=0 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976011 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" 
containerID="614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28" exitCode=0 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976019 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890" exitCode=0 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976026 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54" exitCode=0 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976033 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454" exitCode=0 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976039 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899" exitCode=143 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976045 4841 generic.go:334] "Generic (PLEG): container finished" podID="c56a9daa-a941-4d89-abd0-b7f0472ee869" containerID="2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387" exitCode=143 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976079 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976161 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" 
event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976162 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976187 4841 scope.go:117] "RemoveContainer" containerID="700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976175 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976287 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976305 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976314 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976325 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976336 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976343 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976351 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976356 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976361 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976366 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976372 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976377 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976384 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976392 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976398 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976403 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976408 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976413 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976418 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976422 4841 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976427 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976432 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976437 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976443 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976451 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976457 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976461 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c"} Dec 04 
09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976466 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976472 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976477 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976482 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976487 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976494 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976499 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976506 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhkwl" 
event={"ID":"c56a9daa-a941-4d89-abd0-b7f0472ee869","Type":"ContainerDied","Data":"e2667ad58bda1502145ffa75b12e794bdefd881b89fc35f87e4ee3db9f0bf6f8"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976514 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976520 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976525 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976531 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976536 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976542 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976547 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976552 4841 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976557 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.976562 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.977558 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" event={"ID":"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a","Type":"ContainerStarted","Data":"7756087b42e6a3ec81d3b12b55362661d7f10ebbe4058e3d39121363082ce858"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.980621 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-76xdk_86bfe6c3-d06e-40b1-9801-74abeb07ae15/kube-multus/2.log" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.980950 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-76xdk_86bfe6c3-d06e-40b1-9801-74abeb07ae15/kube-multus/1.log" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.980970 4841 generic.go:334] "Generic (PLEG): container finished" podID="86bfe6c3-d06e-40b1-9801-74abeb07ae15" containerID="f90f2ef31e9ad848c4856dc45963b90decace9cebc9f0054264d6e30b2c584db" exitCode=2 Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.980987 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-76xdk" event={"ID":"86bfe6c3-d06e-40b1-9801-74abeb07ae15","Type":"ContainerDied","Data":"f90f2ef31e9ad848c4856dc45963b90decace9cebc9f0054264d6e30b2c584db"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 
09:29:42.980999 4841 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66"} Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.981442 4841 scope.go:117] "RemoveContainer" containerID="f90f2ef31e9ad848c4856dc45963b90decace9cebc9f0054264d6e30b2c584db" Dec 04 09:29:42 crc kubenswrapper[4841]: E1204 09:29:42.981595 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-76xdk_openshift-multus(86bfe6c3-d06e-40b1-9801-74abeb07ae15)\"" pod="openshift-multus/multus-76xdk" podUID="86bfe6c3-d06e-40b1-9801-74abeb07ae15" Dec 04 09:29:42 crc kubenswrapper[4841]: I1204 09:29:42.999734 4841 scope.go:117] "RemoveContainer" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.070482 4841 scope.go:117] "RemoveContainer" containerID="c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.083163 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hhkwl"] Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.087057 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hhkwl"] Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.095419 4841 scope.go:117] "RemoveContainer" containerID="614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.106678 4841 scope.go:117] "RemoveContainer" containerID="98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.115792 4841 scope.go:117] "RemoveContainer" 
containerID="aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.124676 4841 scope.go:117] "RemoveContainer" containerID="3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.135830 4841 scope.go:117] "RemoveContainer" containerID="1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.145500 4841 scope.go:117] "RemoveContainer" containerID="2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.157867 4841 scope.go:117] "RemoveContainer" containerID="1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.171288 4841 scope.go:117] "RemoveContainer" containerID="700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed" Dec 04 09:29:43 crc kubenswrapper[4841]: E1204 09:29:43.171629 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": container with ID starting with 700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed not found: ID does not exist" containerID="700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.171670 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed"} err="failed to get container status \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": rpc error: code = NotFound desc = could not find container \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": container with ID starting with 
700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.171696 4841 scope.go:117] "RemoveContainer" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:29:43 crc kubenswrapper[4841]: E1204 09:29:43.172028 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\": container with ID starting with 459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf not found: ID does not exist" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.172071 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf"} err="failed to get container status \"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\": rpc error: code = NotFound desc = could not find container \"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\": container with ID starting with 459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.172102 4841 scope.go:117] "RemoveContainer" containerID="c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c" Dec 04 09:29:43 crc kubenswrapper[4841]: E1204 09:29:43.172361 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\": container with ID starting with c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c not found: ID does not exist" containerID="c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c" Dec 04 09:29:43 crc 
kubenswrapper[4841]: I1204 09:29:43.172390 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c"} err="failed to get container status \"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\": rpc error: code = NotFound desc = could not find container \"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\": container with ID starting with c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.172408 4841 scope.go:117] "RemoveContainer" containerID="614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28" Dec 04 09:29:43 crc kubenswrapper[4841]: E1204 09:29:43.172740 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\": container with ID starting with 614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28 not found: ID does not exist" containerID="614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.172787 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28"} err="failed to get container status \"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\": rpc error: code = NotFound desc = could not find container \"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\": container with ID starting with 614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.172808 4841 scope.go:117] "RemoveContainer" containerID="98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890" Dec 04 
09:29:43 crc kubenswrapper[4841]: E1204 09:29:43.173201 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\": container with ID starting with 98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890 not found: ID does not exist" containerID="98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.173231 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890"} err="failed to get container status \"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\": rpc error: code = NotFound desc = could not find container \"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\": container with ID starting with 98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.173251 4841 scope.go:117] "RemoveContainer" containerID="aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54" Dec 04 09:29:43 crc kubenswrapper[4841]: E1204 09:29:43.173499 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\": container with ID starting with aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54 not found: ID does not exist" containerID="aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.173533 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54"} err="failed to get container status 
\"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\": rpc error: code = NotFound desc = could not find container \"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\": container with ID starting with aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.173553 4841 scope.go:117] "RemoveContainer" containerID="3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454" Dec 04 09:29:43 crc kubenswrapper[4841]: E1204 09:29:43.173841 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\": container with ID starting with 3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454 not found: ID does not exist" containerID="3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.173873 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454"} err="failed to get container status \"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\": rpc error: code = NotFound desc = could not find container \"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\": container with ID starting with 3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.173891 4841 scope.go:117] "RemoveContainer" containerID="1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899" Dec 04 09:29:43 crc kubenswrapper[4841]: E1204 09:29:43.174211 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\": container with ID starting with 1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899 not found: ID does not exist" containerID="1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.174243 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899"} err="failed to get container status \"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\": rpc error: code = NotFound desc = could not find container \"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\": container with ID starting with 1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.174263 4841 scope.go:117] "RemoveContainer" containerID="2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387" Dec 04 09:29:43 crc kubenswrapper[4841]: E1204 09:29:43.174538 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\": container with ID starting with 2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387 not found: ID does not exist" containerID="2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.174562 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387"} err="failed to get container status \"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\": rpc error: code = NotFound desc = could not find container \"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\": container with ID 
starting with 2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.174577 4841 scope.go:117] "RemoveContainer" containerID="1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a" Dec 04 09:29:43 crc kubenswrapper[4841]: E1204 09:29:43.174887 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\": container with ID starting with 1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a not found: ID does not exist" containerID="1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.174911 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a"} err="failed to get container status \"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\": rpc error: code = NotFound desc = could not find container \"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\": container with ID starting with 1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.174923 4841 scope.go:117] "RemoveContainer" containerID="700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.175193 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed"} err="failed to get container status \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": rpc error: code = NotFound desc = could not find container \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": 
container with ID starting with 700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.175216 4841 scope.go:117] "RemoveContainer" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.175475 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf"} err="failed to get container status \"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\": rpc error: code = NotFound desc = could not find container \"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\": container with ID starting with 459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.175494 4841 scope.go:117] "RemoveContainer" containerID="c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.175699 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c"} err="failed to get container status \"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\": rpc error: code = NotFound desc = could not find container \"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\": container with ID starting with c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.175721 4841 scope.go:117] "RemoveContainer" containerID="614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.176069 4841 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28"} err="failed to get container status \"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\": rpc error: code = NotFound desc = could not find container \"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\": container with ID starting with 614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.176094 4841 scope.go:117] "RemoveContainer" containerID="98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.176367 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890"} err="failed to get container status \"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\": rpc error: code = NotFound desc = could not find container \"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\": container with ID starting with 98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.176390 4841 scope.go:117] "RemoveContainer" containerID="aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.176671 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54"} err="failed to get container status \"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\": rpc error: code = NotFound desc = could not find container \"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\": container with ID starting with aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54 not found: ID does not 
exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.176716 4841 scope.go:117] "RemoveContainer" containerID="3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.177065 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454"} err="failed to get container status \"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\": rpc error: code = NotFound desc = could not find container \"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\": container with ID starting with 3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.177086 4841 scope.go:117] "RemoveContainer" containerID="1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.177465 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899"} err="failed to get container status \"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\": rpc error: code = NotFound desc = could not find container \"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\": container with ID starting with 1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.177491 4841 scope.go:117] "RemoveContainer" containerID="2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.177846 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387"} err="failed to get container status 
\"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\": rpc error: code = NotFound desc = could not find container \"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\": container with ID starting with 2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.177873 4841 scope.go:117] "RemoveContainer" containerID="1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.178114 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a"} err="failed to get container status \"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\": rpc error: code = NotFound desc = could not find container \"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\": container with ID starting with 1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.178137 4841 scope.go:117] "RemoveContainer" containerID="700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.178413 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed"} err="failed to get container status \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": rpc error: code = NotFound desc = could not find container \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": container with ID starting with 700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.178440 4841 scope.go:117] "RemoveContainer" 
containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.178717 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf"} err="failed to get container status \"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\": rpc error: code = NotFound desc = could not find container \"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\": container with ID starting with 459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.178754 4841 scope.go:117] "RemoveContainer" containerID="c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.179026 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c"} err="failed to get container status \"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\": rpc error: code = NotFound desc = could not find container \"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\": container with ID starting with c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.179050 4841 scope.go:117] "RemoveContainer" containerID="614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.179274 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28"} err="failed to get container status \"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\": rpc error: code = NotFound desc = could 
not find container \"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\": container with ID starting with 614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.179298 4841 scope.go:117] "RemoveContainer" containerID="98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.179532 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890"} err="failed to get container status \"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\": rpc error: code = NotFound desc = could not find container \"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\": container with ID starting with 98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.179560 4841 scope.go:117] "RemoveContainer" containerID="aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.179855 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54"} err="failed to get container status \"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\": rpc error: code = NotFound desc = could not find container \"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\": container with ID starting with aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.179886 4841 scope.go:117] "RemoveContainer" containerID="3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 
09:29:43.180135 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454"} err="failed to get container status \"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\": rpc error: code = NotFound desc = could not find container \"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\": container with ID starting with 3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.180156 4841 scope.go:117] "RemoveContainer" containerID="1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.180432 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899"} err="failed to get container status \"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\": rpc error: code = NotFound desc = could not find container \"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\": container with ID starting with 1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.180453 4841 scope.go:117] "RemoveContainer" containerID="2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.180716 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387"} err="failed to get container status \"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\": rpc error: code = NotFound desc = could not find container \"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\": container with ID starting with 
2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.180744 4841 scope.go:117] "RemoveContainer" containerID="1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.181123 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a"} err="failed to get container status \"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\": rpc error: code = NotFound desc = could not find container \"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\": container with ID starting with 1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.181149 4841 scope.go:117] "RemoveContainer" containerID="700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.181443 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed"} err="failed to get container status \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": rpc error: code = NotFound desc = could not find container \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": container with ID starting with 700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.181467 4841 scope.go:117] "RemoveContainer" containerID="459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.181738 4841 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf"} err="failed to get container status \"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\": rpc error: code = NotFound desc = could not find container \"459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf\": container with ID starting with 459434e6ce2aa574c8757d1b2fbb35204ed0cf5580b4b30dc4f6cbd54f5e63cf not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.181778 4841 scope.go:117] "RemoveContainer" containerID="c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.182139 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c"} err="failed to get container status \"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\": rpc error: code = NotFound desc = could not find container \"c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c\": container with ID starting with c14dfeec7b4cb0cc31747e2e9a88407ffb443000f2a1421877be8d70ca868b7c not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.182171 4841 scope.go:117] "RemoveContainer" containerID="614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.182437 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28"} err="failed to get container status \"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\": rpc error: code = NotFound desc = could not find container \"614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28\": container with ID starting with 614a1b1246baa6796092c998da0499eea51a2ae7ce941d6bd47f4706048f1e28 not found: ID does not 
exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.182464 4841 scope.go:117] "RemoveContainer" containerID="98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.182705 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890"} err="failed to get container status \"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\": rpc error: code = NotFound desc = could not find container \"98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890\": container with ID starting with 98c574fa1ace7ac2e804c25d05b1e0d76a1879e4d997b63b3787927588632890 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.182733 4841 scope.go:117] "RemoveContainer" containerID="aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.183006 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54"} err="failed to get container status \"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\": rpc error: code = NotFound desc = could not find container \"aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54\": container with ID starting with aa1cfb2de7588c8f9e0942ca85dcf90744e442e8aa1d83927707c22aaf0f5b54 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.183031 4841 scope.go:117] "RemoveContainer" containerID="3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.183233 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454"} err="failed to get container status 
\"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\": rpc error: code = NotFound desc = could not find container \"3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454\": container with ID starting with 3ac11e23c14fb76b3af7ee791891fb475d2d19ff982b1814cba668b7e0b0f454 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.183263 4841 scope.go:117] "RemoveContainer" containerID="1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.183463 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899"} err="failed to get container status \"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\": rpc error: code = NotFound desc = could not find container \"1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899\": container with ID starting with 1fb6aec9b17f1ca0312fb098dc03b790a579832037f912ef99896bc684074899 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.183489 4841 scope.go:117] "RemoveContainer" containerID="2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.183728 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387"} err="failed to get container status \"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\": rpc error: code = NotFound desc = could not find container \"2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387\": container with ID starting with 2de95134f4c720121b8c39a3676afcc31a7228126b5422f65f5ae2526bd93387 not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.183773 4841 scope.go:117] "RemoveContainer" 
containerID="1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.184053 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a"} err="failed to get container status \"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\": rpc error: code = NotFound desc = could not find container \"1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a\": container with ID starting with 1aa5620e301cacb68cdec5bf052ccffb22a53099fafca60a042f68dd85a64b6a not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.184080 4841 scope.go:117] "RemoveContainer" containerID="700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.184297 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed"} err="failed to get container status \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": rpc error: code = NotFound desc = could not find container \"700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed\": container with ID starting with 700257a1afafaac87bb617558946651c52aaf5743cbfb7db12ebea4103aa85ed not found: ID does not exist" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.624984 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c56a9daa-a941-4d89-abd0-b7f0472ee869" path="/var/lib/kubelet/pods/c56a9daa-a941-4d89-abd0-b7f0472ee869/volumes" Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.988151 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" 
event={"ID":"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a","Type":"ContainerDied","Data":"46243dac2f82368ab20e83906183409e587e493c790977f90178d6d489c11c88"} Dec 04 09:29:43 crc kubenswrapper[4841]: I1204 09:29:43.988202 4841 generic.go:334] "Generic (PLEG): container finished" podID="a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a" containerID="46243dac2f82368ab20e83906183409e587e493c790977f90178d6d489c11c88" exitCode=0 Dec 04 09:29:45 crc kubenswrapper[4841]: I1204 09:29:45.005933 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" event={"ID":"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a","Type":"ContainerStarted","Data":"d57ba3f0ec7365aaf37ef90b765b11a65aeccd657c9b851ab440e82b6a22ac03"} Dec 04 09:29:45 crc kubenswrapper[4841]: I1204 09:29:45.006224 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" event={"ID":"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a","Type":"ContainerStarted","Data":"0283cae2da28645a29c90ecc8413be4301b1dd16d24d3b163cf23a17eed4ef33"} Dec 04 09:29:46 crc kubenswrapper[4841]: I1204 09:29:46.017785 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" event={"ID":"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a","Type":"ContainerStarted","Data":"18375fc818253c10dd1fa792a76c8ccd77c5ea8e20acf1d60543bd92391e5883"} Dec 04 09:29:46 crc kubenswrapper[4841]: I1204 09:29:46.018093 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" event={"ID":"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a","Type":"ContainerStarted","Data":"ea2795763fad387703495c878ebd94ab6fcc2e5295d938641ae61f4d1e2b109f"} Dec 04 09:29:46 crc kubenswrapper[4841]: I1204 09:29:46.018104 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" 
event={"ID":"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a","Type":"ContainerStarted","Data":"ebd8287cb1924a30e0d836236cd16ddf3e6bdc1df83fb58ba47cc07149969f63"} Dec 04 09:29:47 crc kubenswrapper[4841]: I1204 09:29:47.027670 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" event={"ID":"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a","Type":"ContainerStarted","Data":"2cbadc9f6552af4a21ca5f10c78dfd18d13f400f89e212e1cc8cb803849fdd23"} Dec 04 09:29:49 crc kubenswrapper[4841]: I1204 09:29:49.049026 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" event={"ID":"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a","Type":"ContainerStarted","Data":"f92db04b14b81a15a0d1f2f97d5bd04a4db93feaee6532ad8c254405e630311d"} Dec 04 09:29:51 crc kubenswrapper[4841]: I1204 09:29:51.061107 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" event={"ID":"a07a9d53-5b2e-4d9a-9a72-344a77f2ae0a","Type":"ContainerStarted","Data":"0a1e71fec203e671a9c6428df743a8f3c37d3c3b71200c8cb8964375fead8959"} Dec 04 09:29:51 crc kubenswrapper[4841]: I1204 09:29:51.061436 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:51 crc kubenswrapper[4841]: I1204 09:29:51.061453 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:51 crc kubenswrapper[4841]: I1204 09:29:51.061463 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:51 crc kubenswrapper[4841]: I1204 09:29:51.087504 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:51 crc kubenswrapper[4841]: I1204 09:29:51.092556 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:29:51 crc kubenswrapper[4841]: I1204 09:29:51.093104 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" podStartSLOduration=9.093093329 podStartE2EDuration="9.093093329s" podCreationTimestamp="2025-12-04 09:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:29:51.091479598 +0000 UTC m=+657.843269812" watchObservedRunningTime="2025-12-04 09:29:51.093093329 +0000 UTC m=+657.844883533" Dec 04 09:29:53 crc kubenswrapper[4841]: I1204 09:29:53.867322 4841 scope.go:117] "RemoveContainer" containerID="d36e160f17cdf2c3354d040de02f6d79d90d1b1336a561638869bddad4711c66" Dec 04 09:29:55 crc kubenswrapper[4841]: I1204 09:29:55.090243 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-76xdk_86bfe6c3-d06e-40b1-9801-74abeb07ae15/kube-multus/2.log" Dec 04 09:29:57 crc kubenswrapper[4841]: I1204 09:29:57.617235 4841 scope.go:117] "RemoveContainer" containerID="f90f2ef31e9ad848c4856dc45963b90decace9cebc9f0054264d6e30b2c584db" Dec 04 09:29:57 crc kubenswrapper[4841]: E1204 09:29:57.617880 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-76xdk_openshift-multus(86bfe6c3-d06e-40b1-9801-74abeb07ae15)\"" pod="openshift-multus/multus-76xdk" podUID="86bfe6c3-d06e-40b1-9801-74abeb07ae15" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.180682 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf"] Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.181977 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.189391 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.189655 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.189808 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf"] Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.381698 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtn55\" (UniqueName: \"kubernetes.io/projected/b8ad50ad-2c45-4fc3-9636-4809e15579cf-kube-api-access-dtn55\") pod \"collect-profiles-29414010-wq7xf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.382046 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8ad50ad-2c45-4fc3-9636-4809e15579cf-secret-volume\") pod \"collect-profiles-29414010-wq7xf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.382274 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8ad50ad-2c45-4fc3-9636-4809e15579cf-config-volume\") pod \"collect-profiles-29414010-wq7xf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.483018 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8ad50ad-2c45-4fc3-9636-4809e15579cf-config-volume\") pod \"collect-profiles-29414010-wq7xf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.483347 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtn55\" (UniqueName: \"kubernetes.io/projected/b8ad50ad-2c45-4fc3-9636-4809e15579cf-kube-api-access-dtn55\") pod \"collect-profiles-29414010-wq7xf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.483448 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8ad50ad-2c45-4fc3-9636-4809e15579cf-secret-volume\") pod \"collect-profiles-29414010-wq7xf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.483891 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8ad50ad-2c45-4fc3-9636-4809e15579cf-config-volume\") pod \"collect-profiles-29414010-wq7xf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.489580 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b8ad50ad-2c45-4fc3-9636-4809e15579cf-secret-volume\") pod \"collect-profiles-29414010-wq7xf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.499424 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtn55\" (UniqueName: \"kubernetes.io/projected/b8ad50ad-2c45-4fc3-9636-4809e15579cf-kube-api-access-dtn55\") pod \"collect-profiles-29414010-wq7xf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: I1204 09:30:00.503905 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: E1204 09:30:00.524225 4841 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager_b8ad50ad-2c45-4fc3-9636-4809e15579cf_0(c79f61792c2771efdc9114f139a37b2f5dfe956bf1187342ac8813a7d830ced0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 09:30:00 crc kubenswrapper[4841]: E1204 09:30:00.524325 4841 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager_b8ad50ad-2c45-4fc3-9636-4809e15579cf_0(c79f61792c2771efdc9114f139a37b2f5dfe956bf1187342ac8813a7d830ced0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: E1204 09:30:00.524370 4841 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager_b8ad50ad-2c45-4fc3-9636-4809e15579cf_0(c79f61792c2771efdc9114f139a37b2f5dfe956bf1187342ac8813a7d830ced0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:00 crc kubenswrapper[4841]: E1204 09:30:00.524438 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager(b8ad50ad-2c45-4fc3-9636-4809e15579cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager(b8ad50ad-2c45-4fc3-9636-4809e15579cf)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager_b8ad50ad-2c45-4fc3-9636-4809e15579cf_0(c79f61792c2771efdc9114f139a37b2f5dfe956bf1187342ac8813a7d830ced0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" podUID="b8ad50ad-2c45-4fc3-9636-4809e15579cf" Dec 04 09:30:01 crc kubenswrapper[4841]: I1204 09:30:01.124165 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:01 crc kubenswrapper[4841]: I1204 09:30:01.124815 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:01 crc kubenswrapper[4841]: E1204 09:30:01.151391 4841 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager_b8ad50ad-2c45-4fc3-9636-4809e15579cf_0(ab90f89d012809162702f366621c0215559d9b03af5c24afad4f97979be6cadd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 09:30:01 crc kubenswrapper[4841]: E1204 09:30:01.151476 4841 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager_b8ad50ad-2c45-4fc3-9636-4809e15579cf_0(ab90f89d012809162702f366621c0215559d9b03af5c24afad4f97979be6cadd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:01 crc kubenswrapper[4841]: E1204 09:30:01.151502 4841 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager_b8ad50ad-2c45-4fc3-9636-4809e15579cf_0(ab90f89d012809162702f366621c0215559d9b03af5c24afad4f97979be6cadd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:01 crc kubenswrapper[4841]: E1204 09:30:01.151565 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager(b8ad50ad-2c45-4fc3-9636-4809e15579cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager(b8ad50ad-2c45-4fc3-9636-4809e15579cf)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29414010-wq7xf_openshift-operator-lifecycle-manager_b8ad50ad-2c45-4fc3-9636-4809e15579cf_0(ab90f89d012809162702f366621c0215559d9b03af5c24afad4f97979be6cadd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" podUID="b8ad50ad-2c45-4fc3-9636-4809e15579cf" Dec 04 09:30:10 crc kubenswrapper[4841]: I1204 09:30:10.616964 4841 scope.go:117] "RemoveContainer" containerID="f90f2ef31e9ad848c4856dc45963b90decace9cebc9f0054264d6e30b2c584db" Dec 04 09:30:11 crc kubenswrapper[4841]: I1204 09:30:11.183663 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-76xdk_86bfe6c3-d06e-40b1-9801-74abeb07ae15/kube-multus/2.log" Dec 04 09:30:11 crc kubenswrapper[4841]: I1204 09:30:11.184272 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-76xdk" event={"ID":"86bfe6c3-d06e-40b1-9801-74abeb07ae15","Type":"ContainerStarted","Data":"930210c76551f9b2835c8098fbd95cbcf15a891294eae5917ba1f7945f9d64ce"} Dec 04 09:30:12 crc kubenswrapper[4841]: I1204 09:30:12.858088 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4vldw" Dec 04 09:30:14 crc kubenswrapper[4841]: I1204 09:30:14.616661 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:14 crc kubenswrapper[4841]: I1204 09:30:14.618125 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:14 crc kubenswrapper[4841]: I1204 09:30:14.873230 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf"] Dec 04 09:30:15 crc kubenswrapper[4841]: I1204 09:30:15.210335 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" event={"ID":"b8ad50ad-2c45-4fc3-9636-4809e15579cf","Type":"ContainerStarted","Data":"db8c4e1f132f06fd91884ee0fcdf6f95d0c98e06d9ad0cd92969f56f02d2537d"} Dec 04 09:30:16 crc kubenswrapper[4841]: I1204 09:30:16.217123 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" event={"ID":"b8ad50ad-2c45-4fc3-9636-4809e15579cf","Type":"ContainerStarted","Data":"b94b422b5b0a58dada7e03c9abba3aea61a817e2e56376805d5b64e2981766be"} Dec 04 09:30:16 crc kubenswrapper[4841]: I1204 09:30:16.232578 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" podStartSLOduration=16.232558884 podStartE2EDuration="16.232558884s" podCreationTimestamp="2025-12-04 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:30:16.228493881 +0000 UTC m=+682.980284105" watchObservedRunningTime="2025-12-04 09:30:16.232558884 +0000 UTC m=+682.984349088" Dec 04 09:30:19 crc kubenswrapper[4841]: I1204 09:30:19.234907 4841 generic.go:334] "Generic (PLEG): container finished" podID="b8ad50ad-2c45-4fc3-9636-4809e15579cf" 
containerID="b94b422b5b0a58dada7e03c9abba3aea61a817e2e56376805d5b64e2981766be" exitCode=0 Dec 04 09:30:19 crc kubenswrapper[4841]: I1204 09:30:19.235054 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" event={"ID":"b8ad50ad-2c45-4fc3-9636-4809e15579cf","Type":"ContainerDied","Data":"b94b422b5b0a58dada7e03c9abba3aea61a817e2e56376805d5b64e2981766be"} Dec 04 09:30:20 crc kubenswrapper[4841]: I1204 09:30:20.898549 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:20 crc kubenswrapper[4841]: I1204 09:30:20.954069 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8ad50ad-2c45-4fc3-9636-4809e15579cf-config-volume\") pod \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " Dec 04 09:30:20 crc kubenswrapper[4841]: I1204 09:30:20.955444 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8ad50ad-2c45-4fc3-9636-4809e15579cf-config-volume" (OuterVolumeSpecName: "config-volume") pod "b8ad50ad-2c45-4fc3-9636-4809e15579cf" (UID: "b8ad50ad-2c45-4fc3-9636-4809e15579cf"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:30:21 crc kubenswrapper[4841]: I1204 09:30:21.055227 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtn55\" (UniqueName: \"kubernetes.io/projected/b8ad50ad-2c45-4fc3-9636-4809e15579cf-kube-api-access-dtn55\") pod \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " Dec 04 09:30:21 crc kubenswrapper[4841]: I1204 09:30:21.055737 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8ad50ad-2c45-4fc3-9636-4809e15579cf-secret-volume\") pod \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\" (UID: \"b8ad50ad-2c45-4fc3-9636-4809e15579cf\") " Dec 04 09:30:21 crc kubenswrapper[4841]: I1204 09:30:21.055964 4841 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8ad50ad-2c45-4fc3-9636-4809e15579cf-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 09:30:21 crc kubenswrapper[4841]: I1204 09:30:21.062360 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ad50ad-2c45-4fc3-9636-4809e15579cf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b8ad50ad-2c45-4fc3-9636-4809e15579cf" (UID: "b8ad50ad-2c45-4fc3-9636-4809e15579cf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:30:21 crc kubenswrapper[4841]: I1204 09:30:21.062446 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ad50ad-2c45-4fc3-9636-4809e15579cf-kube-api-access-dtn55" (OuterVolumeSpecName: "kube-api-access-dtn55") pod "b8ad50ad-2c45-4fc3-9636-4809e15579cf" (UID: "b8ad50ad-2c45-4fc3-9636-4809e15579cf"). InnerVolumeSpecName "kube-api-access-dtn55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:30:21 crc kubenswrapper[4841]: I1204 09:30:21.157119 4841 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8ad50ad-2c45-4fc3-9636-4809e15579cf-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 09:30:21 crc kubenswrapper[4841]: I1204 09:30:21.157163 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtn55\" (UniqueName: \"kubernetes.io/projected/b8ad50ad-2c45-4fc3-9636-4809e15579cf-kube-api-access-dtn55\") on node \"crc\" DevicePath \"\"" Dec 04 09:30:21 crc kubenswrapper[4841]: I1204 09:30:21.250810 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" event={"ID":"b8ad50ad-2c45-4fc3-9636-4809e15579cf","Type":"ContainerDied","Data":"db8c4e1f132f06fd91884ee0fcdf6f95d0c98e06d9ad0cd92969f56f02d2537d"} Dec 04 09:30:21 crc kubenswrapper[4841]: I1204 09:30:21.250886 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db8c4e1f132f06fd91884ee0fcdf6f95d0c98e06d9ad0cd92969f56f02d2537d" Dec 04 09:30:21 crc kubenswrapper[4841]: I1204 09:30:21.251029 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414010-wq7xf" Dec 04 09:30:59 crc kubenswrapper[4841]: I1204 09:30:59.013589 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qn7mb"] Dec 04 09:30:59 crc kubenswrapper[4841]: I1204 09:30:59.016314 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qn7mb" podUID="a5f6574a-c54e-4652-adca-674d077a3282" containerName="registry-server" containerID="cri-o://d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1" gracePeriod=30 Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.155460 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.300289 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-catalog-content\") pod \"a5f6574a-c54e-4652-adca-674d077a3282\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.300367 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-utilities\") pod \"a5f6574a-c54e-4652-adca-674d077a3282\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.301131 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6ft6\" (UniqueName: \"kubernetes.io/projected/a5f6574a-c54e-4652-adca-674d077a3282-kube-api-access-g6ft6\") pod \"a5f6574a-c54e-4652-adca-674d077a3282\" (UID: \"a5f6574a-c54e-4652-adca-674d077a3282\") " Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.301495 4841 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-utilities" (OuterVolumeSpecName: "utilities") pod "a5f6574a-c54e-4652-adca-674d077a3282" (UID: "a5f6574a-c54e-4652-adca-674d077a3282"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.301679 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.306964 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f6574a-c54e-4652-adca-674d077a3282-kube-api-access-g6ft6" (OuterVolumeSpecName: "kube-api-access-g6ft6") pod "a5f6574a-c54e-4652-adca-674d077a3282" (UID: "a5f6574a-c54e-4652-adca-674d077a3282"). InnerVolumeSpecName "kube-api-access-g6ft6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.341232 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5f6574a-c54e-4652-adca-674d077a3282" (UID: "a5f6574a-c54e-4652-adca-674d077a3282"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.402085 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6ft6\" (UniqueName: \"kubernetes.io/projected/a5f6574a-c54e-4652-adca-674d077a3282-kube-api-access-g6ft6\") on node \"crc\" DevicePath \"\"" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.402118 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f6574a-c54e-4652-adca-674d077a3282-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.506007 4841 generic.go:334] "Generic (PLEG): container finished" podID="a5f6574a-c54e-4652-adca-674d077a3282" containerID="d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1" exitCode=0 Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.506072 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qn7mb" event={"ID":"a5f6574a-c54e-4652-adca-674d077a3282","Type":"ContainerDied","Data":"d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1"} Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.506087 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qn7mb" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.506127 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qn7mb" event={"ID":"a5f6574a-c54e-4652-adca-674d077a3282","Type":"ContainerDied","Data":"8d3c6a778a1574e8c87a24ce798d2a3545875e2fec5dc95bfed21d018be5f825"} Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.506158 4841 scope.go:117] "RemoveContainer" containerID="d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.535994 4841 scope.go:117] "RemoveContainer" containerID="766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.537334 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qn7mb"] Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.544959 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qn7mb"] Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.555395 4841 scope.go:117] "RemoveContainer" containerID="705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.586890 4841 scope.go:117] "RemoveContainer" containerID="d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1" Dec 04 09:31:00 crc kubenswrapper[4841]: E1204 09:31:00.587549 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1\": container with ID starting with d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1 not found: ID does not exist" containerID="d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.587575 4841 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1"} err="failed to get container status \"d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1\": rpc error: code = NotFound desc = could not find container \"d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1\": container with ID starting with d8e09a1483038c287291fa76073bb72ebcbf0933484661e6d34cf0c7a3e6b4e1 not found: ID does not exist" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.587597 4841 scope.go:117] "RemoveContainer" containerID="766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f" Dec 04 09:31:00 crc kubenswrapper[4841]: E1204 09:31:00.588512 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f\": container with ID starting with 766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f not found: ID does not exist" containerID="766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.588556 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f"} err="failed to get container status \"766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f\": rpc error: code = NotFound desc = could not find container \"766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f\": container with ID starting with 766873ebd6d47c7dac7db3e8b0669c79cef18946bb0e92921af0e0dc0ee3dd0f not found: ID does not exist" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.588585 4841 scope.go:117] "RemoveContainer" containerID="705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84" Dec 04 09:31:00 crc kubenswrapper[4841]: E1204 
09:31:00.589226 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84\": container with ID starting with 705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84 not found: ID does not exist" containerID="705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84" Dec 04 09:31:00 crc kubenswrapper[4841]: I1204 09:31:00.589440 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84"} err="failed to get container status \"705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84\": rpc error: code = NotFound desc = could not find container \"705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84\": container with ID starting with 705498ab0d252b91ee66f05ae356d0a276a5f53497ca67bf07b3d5d9cf194e84 not found: ID does not exist" Dec 04 09:31:01 crc kubenswrapper[4841]: I1204 09:31:01.622934 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f6574a-c54e-4652-adca-674d077a3282" path="/var/lib/kubelet/pods/a5f6574a-c54e-4652-adca-674d077a3282/volumes" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.650696 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4"] Dec 04 09:31:02 crc kubenswrapper[4841]: E1204 09:31:02.650913 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ad50ad-2c45-4fc3-9636-4809e15579cf" containerName="collect-profiles" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.650925 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ad50ad-2c45-4fc3-9636-4809e15579cf" containerName="collect-profiles" Dec 04 09:31:02 crc kubenswrapper[4841]: E1204 09:31:02.650941 4841 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a5f6574a-c54e-4652-adca-674d077a3282" containerName="extract-utilities" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.650947 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f6574a-c54e-4652-adca-674d077a3282" containerName="extract-utilities" Dec 04 09:31:02 crc kubenswrapper[4841]: E1204 09:31:02.650957 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f6574a-c54e-4652-adca-674d077a3282" containerName="registry-server" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.650963 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f6574a-c54e-4652-adca-674d077a3282" containerName="registry-server" Dec 04 09:31:02 crc kubenswrapper[4841]: E1204 09:31:02.650975 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f6574a-c54e-4652-adca-674d077a3282" containerName="extract-content" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.650981 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f6574a-c54e-4652-adca-674d077a3282" containerName="extract-content" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.651088 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f6574a-c54e-4652-adca-674d077a3282" containerName="registry-server" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.651098 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ad50ad-2c45-4fc3-9636-4809e15579cf" containerName="collect-profiles" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.651845 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.656734 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.675701 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4"] Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.850558 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.850624 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8ckg\" (UniqueName: \"kubernetes.io/projected/16c8917f-1adc-4ed5-bc5d-465d125693a9-kube-api-access-k8ckg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.850701 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:02 crc kubenswrapper[4841]: 
I1204 09:31:02.951923 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.952513 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.952950 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8ckg\" (UniqueName: \"kubernetes.io/projected/16c8917f-1adc-4ed5-bc5d-465d125693a9-kube-api-access-k8ckg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.953160 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.954111 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:02 crc kubenswrapper[4841]: I1204 09:31:02.992875 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8ckg\" (UniqueName: \"kubernetes.io/projected/16c8917f-1adc-4ed5-bc5d-465d125693a9-kube-api-access-k8ckg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:03 crc kubenswrapper[4841]: I1204 09:31:03.270391 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:03 crc kubenswrapper[4841]: I1204 09:31:03.471737 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4"] Dec 04 09:31:03 crc kubenswrapper[4841]: I1204 09:31:03.529243 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" event={"ID":"16c8917f-1adc-4ed5-bc5d-465d125693a9","Type":"ContainerStarted","Data":"571e47c7705100f4c18fc346f8faab42a241a080ee6144145103933f2a7f0a12"} Dec 04 09:31:04 crc kubenswrapper[4841]: I1204 09:31:04.536856 4841 generic.go:334] "Generic (PLEG): container finished" podID="16c8917f-1adc-4ed5-bc5d-465d125693a9" containerID="a87a5711d383a384e2d229e8f969111ddd02d1d0cc7664484b11200db7af82dd" exitCode=0 Dec 04 09:31:04 crc kubenswrapper[4841]: I1204 09:31:04.536899 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" event={"ID":"16c8917f-1adc-4ed5-bc5d-465d125693a9","Type":"ContainerDied","Data":"a87a5711d383a384e2d229e8f969111ddd02d1d0cc7664484b11200db7af82dd"} Dec 04 09:31:04 crc kubenswrapper[4841]: I1204 09:31:04.549850 4841 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 09:31:06 crc kubenswrapper[4841]: I1204 09:31:06.553291 4841 generic.go:334] "Generic (PLEG): container finished" podID="16c8917f-1adc-4ed5-bc5d-465d125693a9" containerID="a42d00634eaced4d38125d89410cb23066bf65ac7d5e8c337fa3164831cb62f5" exitCode=0 Dec 04 09:31:06 crc kubenswrapper[4841]: I1204 09:31:06.553371 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" event={"ID":"16c8917f-1adc-4ed5-bc5d-465d125693a9","Type":"ContainerDied","Data":"a42d00634eaced4d38125d89410cb23066bf65ac7d5e8c337fa3164831cb62f5"} Dec 04 09:31:07 crc kubenswrapper[4841]: I1204 09:31:07.563635 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" event={"ID":"16c8917f-1adc-4ed5-bc5d-465d125693a9","Type":"ContainerStarted","Data":"5929867e928d06db882776caf4b5f1f506571cd0a46ce7798ddad8879bbf918e"} Dec 04 09:31:07 crc kubenswrapper[4841]: I1204 09:31:07.603513 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" podStartSLOduration=4.6317797899999995 podStartE2EDuration="5.603486754s" podCreationTimestamp="2025-12-04 09:31:02 +0000 UTC" firstStartedPulling="2025-12-04 09:31:04.549491111 +0000 UTC m=+731.301281315" lastFinishedPulling="2025-12-04 09:31:05.521198035 +0000 UTC m=+732.272988279" observedRunningTime="2025-12-04 09:31:07.59690032 +0000 UTC m=+734.348690574" 
watchObservedRunningTime="2025-12-04 09:31:07.603486754 +0000 UTC m=+734.355276998" Dec 04 09:31:08 crc kubenswrapper[4841]: I1204 09:31:08.609261 4841 generic.go:334] "Generic (PLEG): container finished" podID="16c8917f-1adc-4ed5-bc5d-465d125693a9" containerID="5929867e928d06db882776caf4b5f1f506571cd0a46ce7798ddad8879bbf918e" exitCode=0 Dec 04 09:31:08 crc kubenswrapper[4841]: I1204 09:31:08.609366 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" event={"ID":"16c8917f-1adc-4ed5-bc5d-465d125693a9","Type":"ContainerDied","Data":"5929867e928d06db882776caf4b5f1f506571cd0a46ce7798ddad8879bbf918e"} Dec 04 09:31:08 crc kubenswrapper[4841]: I1204 09:31:08.855173 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l"] Dec 04 09:31:08 crc kubenswrapper[4841]: I1204 09:31:08.856971 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:08 crc kubenswrapper[4841]: I1204 09:31:08.886564 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l"] Dec 04 09:31:08 crc kubenswrapper[4841]: I1204 09:31:08.934166 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:08 crc kubenswrapper[4841]: I1204 09:31:08.934251 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sssp9\" (UniqueName: \"kubernetes.io/projected/0c9547ae-2030-4a77-a58d-e3a54a430e4f-kube-api-access-sssp9\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:08 crc kubenswrapper[4841]: I1204 09:31:08.934308 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:09 crc kubenswrapper[4841]: I1204 09:31:09.035505 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:09 crc kubenswrapper[4841]: I1204 09:31:09.035554 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sssp9\" (UniqueName: \"kubernetes.io/projected/0c9547ae-2030-4a77-a58d-e3a54a430e4f-kube-api-access-sssp9\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:09 crc kubenswrapper[4841]: I1204 09:31:09.035584 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:09 crc kubenswrapper[4841]: I1204 09:31:09.036182 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:09 crc kubenswrapper[4841]: I1204 09:31:09.036467 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l\" (UID: 
\"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:09 crc kubenswrapper[4841]: I1204 09:31:09.073038 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sssp9\" (UniqueName: \"kubernetes.io/projected/0c9547ae-2030-4a77-a58d-e3a54a430e4f-kube-api-access-sssp9\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:09 crc kubenswrapper[4841]: I1204 09:31:09.187891 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:09 crc kubenswrapper[4841]: W1204 09:31:09.451571 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c9547ae_2030_4a77_a58d_e3a54a430e4f.slice/crio-a22267ece6cb51663b1746649a7401717aafa15cb7be7f9b86ff0de6129c637e WatchSource:0}: Error finding container a22267ece6cb51663b1746649a7401717aafa15cb7be7f9b86ff0de6129c637e: Status 404 returned error can't find the container with id a22267ece6cb51663b1746649a7401717aafa15cb7be7f9b86ff0de6129c637e Dec 04 09:31:09 crc kubenswrapper[4841]: I1204 09:31:09.455670 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l"] Dec 04 09:31:09 crc kubenswrapper[4841]: I1204 09:31:09.626748 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" event={"ID":"0c9547ae-2030-4a77-a58d-e3a54a430e4f","Type":"ContainerStarted","Data":"a22267ece6cb51663b1746649a7401717aafa15cb7be7f9b86ff0de6129c637e"} Dec 04 09:31:09 crc kubenswrapper[4841]: 
I1204 09:31:09.909027 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.059998 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8ckg\" (UniqueName: \"kubernetes.io/projected/16c8917f-1adc-4ed5-bc5d-465d125693a9-kube-api-access-k8ckg\") pod \"16c8917f-1adc-4ed5-bc5d-465d125693a9\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.060119 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-bundle\") pod \"16c8917f-1adc-4ed5-bc5d-465d125693a9\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.060183 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-util\") pod \"16c8917f-1adc-4ed5-bc5d-465d125693a9\" (UID: \"16c8917f-1adc-4ed5-bc5d-465d125693a9\") " Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.063328 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-bundle" (OuterVolumeSpecName: "bundle") pod "16c8917f-1adc-4ed5-bc5d-465d125693a9" (UID: "16c8917f-1adc-4ed5-bc5d-465d125693a9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.066610 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c8917f-1adc-4ed5-bc5d-465d125693a9-kube-api-access-k8ckg" (OuterVolumeSpecName: "kube-api-access-k8ckg") pod "16c8917f-1adc-4ed5-bc5d-465d125693a9" (UID: "16c8917f-1adc-4ed5-bc5d-465d125693a9"). InnerVolumeSpecName "kube-api-access-k8ckg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.084396 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-util" (OuterVolumeSpecName: "util") pod "16c8917f-1adc-4ed5-bc5d-465d125693a9" (UID: "16c8917f-1adc-4ed5-bc5d-465d125693a9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.161579 4841 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-util\") on node \"crc\" DevicePath \"\"" Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.161634 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8ckg\" (UniqueName: \"kubernetes.io/projected/16c8917f-1adc-4ed5-bc5d-465d125693a9-kube-api-access-k8ckg\") on node \"crc\" DevicePath \"\"" Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.161656 4841 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16c8917f-1adc-4ed5-bc5d-465d125693a9-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.625700 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" 
event={"ID":"16c8917f-1adc-4ed5-bc5d-465d125693a9","Type":"ContainerDied","Data":"571e47c7705100f4c18fc346f8faab42a241a080ee6144145103933f2a7f0a12"} Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.625790 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="571e47c7705100f4c18fc346f8faab42a241a080ee6144145103933f2a7f0a12" Dec 04 09:31:10 crc kubenswrapper[4841]: I1204 09:31:10.625842 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4" Dec 04 09:31:11 crc kubenswrapper[4841]: I1204 09:31:11.631227 4841 generic.go:334] "Generic (PLEG): container finished" podID="0c9547ae-2030-4a77-a58d-e3a54a430e4f" containerID="d23dd5bd773ed88db1ca30a10bd61d7a79de4bccccfaac253f893df53f3c6ed0" exitCode=0 Dec 04 09:31:11 crc kubenswrapper[4841]: I1204 09:31:11.631262 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" event={"ID":"0c9547ae-2030-4a77-a58d-e3a54a430e4f","Type":"ContainerDied","Data":"d23dd5bd773ed88db1ca30a10bd61d7a79de4bccccfaac253f893df53f3c6ed0"} Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.051923 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6"] Dec 04 09:31:13 crc kubenswrapper[4841]: E1204 09:31:13.052602 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c8917f-1adc-4ed5-bc5d-465d125693a9" containerName="util" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.052614 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c8917f-1adc-4ed5-bc5d-465d125693a9" containerName="util" Dec 04 09:31:13 crc kubenswrapper[4841]: E1204 09:31:13.052624 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c8917f-1adc-4ed5-bc5d-465d125693a9" containerName="extract" 
Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.052630 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c8917f-1adc-4ed5-bc5d-465d125693a9" containerName="extract" Dec 04 09:31:13 crc kubenswrapper[4841]: E1204 09:31:13.052643 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c8917f-1adc-4ed5-bc5d-465d125693a9" containerName="pull" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.052649 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c8917f-1adc-4ed5-bc5d-465d125693a9" containerName="pull" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.052738 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c8917f-1adc-4ed5-bc5d-465d125693a9" containerName="extract" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.053522 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.057816 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6"] Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.202198 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.202266 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6\" 
(UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.202479 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlnt\" (UniqueName: \"kubernetes.io/projected/f85e6ab8-4e46-4b30-b425-1f812e4faabc-kube-api-access-dxlnt\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.303311 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxlnt\" (UniqueName: \"kubernetes.io/projected/f85e6ab8-4e46-4b30-b425-1f812e4faabc-kube-api-access-dxlnt\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.303380 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.303410 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.304001 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.304057 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.366122 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxlnt\" (UniqueName: \"kubernetes.io/projected/f85e6ab8-4e46-4b30-b425-1f812e4faabc-kube-api-access-dxlnt\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.373068 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.642452 4841 generic.go:334] "Generic (PLEG): container finished" podID="0c9547ae-2030-4a77-a58d-e3a54a430e4f" containerID="2bae8dd88efc18f728b152597ff2b738b3b3dc8b8f185ff1c815f0ff1e2fe643" exitCode=0 Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.642564 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" event={"ID":"0c9547ae-2030-4a77-a58d-e3a54a430e4f","Type":"ContainerDied","Data":"2bae8dd88efc18f728b152597ff2b738b3b3dc8b8f185ff1c815f0ff1e2fe643"} Dec 04 09:31:13 crc kubenswrapper[4841]: I1204 09:31:13.735304 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6"] Dec 04 09:31:14 crc kubenswrapper[4841]: I1204 09:31:14.649403 4841 generic.go:334] "Generic (PLEG): container finished" podID="f85e6ab8-4e46-4b30-b425-1f812e4faabc" containerID="5d01b9cfe2b50d0ab83f6888a1f82aed50e6c2b93249c518ba1647101c229e99" exitCode=0 Dec 04 09:31:14 crc kubenswrapper[4841]: I1204 09:31:14.649441 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" event={"ID":"f85e6ab8-4e46-4b30-b425-1f812e4faabc","Type":"ContainerDied","Data":"5d01b9cfe2b50d0ab83f6888a1f82aed50e6c2b93249c518ba1647101c229e99"} Dec 04 09:31:14 crc kubenswrapper[4841]: I1204 09:31:14.649860 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" event={"ID":"f85e6ab8-4e46-4b30-b425-1f812e4faabc","Type":"ContainerStarted","Data":"ca624a1fb8624cc63a0093e592519e6c2cd872e6f09b72f6a1e2f5304e4bae53"} Dec 04 09:31:14 crc kubenswrapper[4841]: I1204 09:31:14.652011 4841 
generic.go:334] "Generic (PLEG): container finished" podID="0c9547ae-2030-4a77-a58d-e3a54a430e4f" containerID="446bc9ab2b5c834a996cf30a546b87387385c4ec9347639eb77affbd141bfbdf" exitCode=0 Dec 04 09:31:14 crc kubenswrapper[4841]: I1204 09:31:14.652045 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" event={"ID":"0c9547ae-2030-4a77-a58d-e3a54a430e4f","Type":"ContainerDied","Data":"446bc9ab2b5c834a996cf30a546b87387385c4ec9347639eb77affbd141bfbdf"} Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.062832 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.239973 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-util\") pod \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.240049 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sssp9\" (UniqueName: \"kubernetes.io/projected/0c9547ae-2030-4a77-a58d-e3a54a430e4f-kube-api-access-sssp9\") pod \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.240081 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-bundle\") pod \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\" (UID: \"0c9547ae-2030-4a77-a58d-e3a54a430e4f\") " Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.259148 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0c9547ae-2030-4a77-a58d-e3a54a430e4f-kube-api-access-sssp9" (OuterVolumeSpecName: "kube-api-access-sssp9") pod "0c9547ae-2030-4a77-a58d-e3a54a430e4f" (UID: "0c9547ae-2030-4a77-a58d-e3a54a430e4f"). InnerVolumeSpecName "kube-api-access-sssp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.259277 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-bundle" (OuterVolumeSpecName: "bundle") pod "0c9547ae-2030-4a77-a58d-e3a54a430e4f" (UID: "0c9547ae-2030-4a77-a58d-e3a54a430e4f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.341559 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sssp9\" (UniqueName: \"kubernetes.io/projected/0c9547ae-2030-4a77-a58d-e3a54a430e4f-kube-api-access-sssp9\") on node \"crc\" DevicePath \"\"" Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.341590 4841 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.463690 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-util" (OuterVolumeSpecName: "util") pod "0c9547ae-2030-4a77-a58d-e3a54a430e4f" (UID: "0c9547ae-2030-4a77-a58d-e3a54a430e4f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.545028 4841 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0c9547ae-2030-4a77-a58d-e3a54a430e4f-util\") on node \"crc\" DevicePath \"\"" Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.665879 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" event={"ID":"0c9547ae-2030-4a77-a58d-e3a54a430e4f","Type":"ContainerDied","Data":"a22267ece6cb51663b1746649a7401717aafa15cb7be7f9b86ff0de6129c637e"} Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.665913 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a22267ece6cb51663b1746649a7401717aafa15cb7be7f9b86ff0de6129c637e" Dec 04 09:31:16 crc kubenswrapper[4841]: I1204 09:31:16.665913 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.138859 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br"] Dec 04 09:31:19 crc kubenswrapper[4841]: E1204 09:31:19.139792 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9547ae-2030-4a77-a58d-e3a54a430e4f" containerName="extract" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.139855 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9547ae-2030-4a77-a58d-e3a54a430e4f" containerName="extract" Dec 04 09:31:19 crc kubenswrapper[4841]: E1204 09:31:19.139903 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9547ae-2030-4a77-a58d-e3a54a430e4f" containerName="pull" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.139948 4841 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0c9547ae-2030-4a77-a58d-e3a54a430e4f" containerName="pull" Dec 04 09:31:19 crc kubenswrapper[4841]: E1204 09:31:19.139998 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9547ae-2030-4a77-a58d-e3a54a430e4f" containerName="util" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.140042 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9547ae-2030-4a77-a58d-e3a54a430e4f" containerName="util" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.140186 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9547ae-2030-4a77-a58d-e3a54a430e4f" containerName="extract" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.140574 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.143231 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-ncz75" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.143243 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.143461 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.151829 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br"] Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.256636 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w"] Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.257528 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.259452 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-2jl8z" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.259675 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.270890 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w"] Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.277463 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9nv2\" (UniqueName: \"kubernetes.io/projected/5c329c80-8a48-46ed-951c-3eea7069ea2d-kube-api-access-c9nv2\") pod \"obo-prometheus-operator-668cf9dfbb-9w7br\" (UID: \"5c329c80-8a48-46ed-951c-3eea7069ea2d\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.277647 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/328f71ee-61d3-45fa-b31b-920b60b829da-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w\" (UID: \"328f71ee-61d3-45fa-b31b-920b60b829da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.277694 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/328f71ee-61d3-45fa-b31b-920b60b829da-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w\" (UID: \"328f71ee-61d3-45fa-b31b-920b60b829da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.285677 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9"] Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.286687 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.307307 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9"] Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.378688 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/328f71ee-61d3-45fa-b31b-920b60b829da-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w\" (UID: \"328f71ee-61d3-45fa-b31b-920b60b829da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.378724 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/328f71ee-61d3-45fa-b31b-920b60b829da-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w\" (UID: \"328f71ee-61d3-45fa-b31b-920b60b829da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.378758 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/41b62915-03e0-4a31-ad0e-1ae076c46c5d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9\" (UID: \"41b62915-03e0-4a31-ad0e-1ae076c46c5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.378803 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9nv2\" (UniqueName: \"kubernetes.io/projected/5c329c80-8a48-46ed-951c-3eea7069ea2d-kube-api-access-c9nv2\") pod \"obo-prometheus-operator-668cf9dfbb-9w7br\" (UID: \"5c329c80-8a48-46ed-951c-3eea7069ea2d\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.378833 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41b62915-03e0-4a31-ad0e-1ae076c46c5d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9\" (UID: \"41b62915-03e0-4a31-ad0e-1ae076c46c5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.382852 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/328f71ee-61d3-45fa-b31b-920b60b829da-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w\" (UID: \"328f71ee-61d3-45fa-b31b-920b60b829da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.401205 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/328f71ee-61d3-45fa-b31b-920b60b829da-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w\" (UID: \"328f71ee-61d3-45fa-b31b-920b60b829da\") 
" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.401590 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9nv2\" (UniqueName: \"kubernetes.io/projected/5c329c80-8a48-46ed-951c-3eea7069ea2d-kube-api-access-c9nv2\") pod \"obo-prometheus-operator-668cf9dfbb-9w7br\" (UID: \"5c329c80-8a48-46ed-951c-3eea7069ea2d\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.457412 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.467792 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6fwx2"] Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.468662 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.470342 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.470392 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6qh89" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.481654 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41b62915-03e0-4a31-ad0e-1ae076c46c5d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9\" (UID: \"41b62915-03e0-4a31-ad0e-1ae076c46c5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.481788 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52522\" (UniqueName: \"kubernetes.io/projected/17fa5a89-d2a0-4e18-a108-3d65165edf2c-kube-api-access-52522\") pod \"observability-operator-d8bb48f5d-6fwx2\" (UID: \"17fa5a89-d2a0-4e18-a108-3d65165edf2c\") " pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.481831 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41b62915-03e0-4a31-ad0e-1ae076c46c5d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9\" (UID: \"41b62915-03e0-4a31-ad0e-1ae076c46c5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.481847 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/17fa5a89-d2a0-4e18-a108-3d65165edf2c-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6fwx2\" (UID: \"17fa5a89-d2a0-4e18-a108-3d65165edf2c\") " pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.484066 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41b62915-03e0-4a31-ad0e-1ae076c46c5d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9\" (UID: \"41b62915-03e0-4a31-ad0e-1ae076c46c5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.487252 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6fwx2"] Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.498251 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41b62915-03e0-4a31-ad0e-1ae076c46c5d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9\" (UID: \"41b62915-03e0-4a31-ad0e-1ae076c46c5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.572800 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.586019 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52522\" (UniqueName: \"kubernetes.io/projected/17fa5a89-d2a0-4e18-a108-3d65165edf2c-kube-api-access-52522\") pod \"observability-operator-d8bb48f5d-6fwx2\" (UID: \"17fa5a89-d2a0-4e18-a108-3d65165edf2c\") " pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.586089 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/17fa5a89-d2a0-4e18-a108-3d65165edf2c-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6fwx2\" (UID: \"17fa5a89-d2a0-4e18-a108-3d65165edf2c\") " pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.589347 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/17fa5a89-d2a0-4e18-a108-3d65165edf2c-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6fwx2\" (UID: \"17fa5a89-d2a0-4e18-a108-3d65165edf2c\") " pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.602477 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52522\" (UniqueName: \"kubernetes.io/projected/17fa5a89-d2a0-4e18-a108-3d65165edf2c-kube-api-access-52522\") pod \"observability-operator-d8bb48f5d-6fwx2\" (UID: \"17fa5a89-d2a0-4e18-a108-3d65165edf2c\") " pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.604972 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.657346 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-5hpbz"] Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.658027 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-5hpbz" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.659622 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-5pvtm" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.676665 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-5hpbz"] Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.687081 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lld\" (UniqueName: \"kubernetes.io/projected/503a4298-2b62-4e18-a4be-637dc8b9ffeb-kube-api-access-w6lld\") pod \"perses-operator-5446b9c989-5hpbz\" (UID: \"503a4298-2b62-4e18-a4be-637dc8b9ffeb\") " pod="openshift-operators/perses-operator-5446b9c989-5hpbz" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.687169 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/503a4298-2b62-4e18-a4be-637dc8b9ffeb-openshift-service-ca\") pod \"perses-operator-5446b9c989-5hpbz\" (UID: \"503a4298-2b62-4e18-a4be-637dc8b9ffeb\") " pod="openshift-operators/perses-operator-5446b9c989-5hpbz" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.787208 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.787710 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/503a4298-2b62-4e18-a4be-637dc8b9ffeb-openshift-service-ca\") pod \"perses-operator-5446b9c989-5hpbz\" (UID: \"503a4298-2b62-4e18-a4be-637dc8b9ffeb\") " pod="openshift-operators/perses-operator-5446b9c989-5hpbz" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.787813 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lld\" (UniqueName: \"kubernetes.io/projected/503a4298-2b62-4e18-a4be-637dc8b9ffeb-kube-api-access-w6lld\") pod \"perses-operator-5446b9c989-5hpbz\" (UID: \"503a4298-2b62-4e18-a4be-637dc8b9ffeb\") " pod="openshift-operators/perses-operator-5446b9c989-5hpbz" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.788783 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/503a4298-2b62-4e18-a4be-637dc8b9ffeb-openshift-service-ca\") pod \"perses-operator-5446b9c989-5hpbz\" (UID: \"503a4298-2b62-4e18-a4be-637dc8b9ffeb\") " pod="openshift-operators/perses-operator-5446b9c989-5hpbz" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.818578 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lld\" (UniqueName: \"kubernetes.io/projected/503a4298-2b62-4e18-a4be-637dc8b9ffeb-kube-api-access-w6lld\") pod \"perses-operator-5446b9c989-5hpbz\" (UID: \"503a4298-2b62-4e18-a4be-637dc8b9ffeb\") " pod="openshift-operators/perses-operator-5446b9c989-5hpbz" Dec 04 09:31:19 crc kubenswrapper[4841]: I1204 09:31:19.976347 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-5hpbz" Dec 04 09:31:20 crc kubenswrapper[4841]: I1204 09:31:20.497556 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:31:20 crc kubenswrapper[4841]: I1204 09:31:20.497820 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:31:20 crc kubenswrapper[4841]: I1204 09:31:20.557647 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w"] Dec 04 09:31:20 crc kubenswrapper[4841]: I1204 09:31:20.562530 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br"] Dec 04 09:31:20 crc kubenswrapper[4841]: I1204 09:31:20.666007 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9"] Dec 04 09:31:20 crc kubenswrapper[4841]: W1204 09:31:20.670083 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41b62915_03e0_4a31_ad0e_1ae076c46c5d.slice/crio-a9a9d484b145be893f78757500b5ec0c01756e709239adecf145c6757e0dc4b6 WatchSource:0}: Error finding container a9a9d484b145be893f78757500b5ec0c01756e709239adecf145c6757e0dc4b6: Status 404 returned error can't find the container with id a9a9d484b145be893f78757500b5ec0c01756e709239adecf145c6757e0dc4b6 Dec 04 09:31:20 crc 
kubenswrapper[4841]: I1204 09:31:20.686041 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br" event={"ID":"5c329c80-8a48-46ed-951c-3eea7069ea2d","Type":"ContainerStarted","Data":"a6cb855c342fc7aa401d693ac65612445aed273ec37886fb424deb1fe2bbe768"}
Dec 04 09:31:20 crc kubenswrapper[4841]: I1204 09:31:20.686790 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" event={"ID":"41b62915-03e0-4a31-ad0e-1ae076c46c5d","Type":"ContainerStarted","Data":"a9a9d484b145be893f78757500b5ec0c01756e709239adecf145c6757e0dc4b6"}
Dec 04 09:31:20 crc kubenswrapper[4841]: I1204 09:31:20.687900 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" event={"ID":"328f71ee-61d3-45fa-b31b-920b60b829da","Type":"ContainerStarted","Data":"6112741caf5af7dd615c39b942b01706db66a8242b83fd7661397c624330e975"}
Dec 04 09:31:20 crc kubenswrapper[4841]: I1204 09:31:20.689348 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" event={"ID":"f85e6ab8-4e46-4b30-b425-1f812e4faabc","Type":"ContainerStarted","Data":"7037c1f8c0f41e24653b5d10f1440d4709aae1a876f136eb29600f44f487fc16"}
Dec 04 09:31:20 crc kubenswrapper[4841]: I1204 09:31:20.726111 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6fwx2"]
Dec 04 09:31:20 crc kubenswrapper[4841]: W1204 09:31:20.731105 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17fa5a89_d2a0_4e18_a108_3d65165edf2c.slice/crio-035c80455af600af7ec16d5b61988031b1f3c3880e7ceba8fbb654b441d69a52 WatchSource:0}: Error finding container 035c80455af600af7ec16d5b61988031b1f3c3880e7ceba8fbb654b441d69a52: Status 404 returned error can't find the container with id 035c80455af600af7ec16d5b61988031b1f3c3880e7ceba8fbb654b441d69a52
Dec 04 09:31:20 crc kubenswrapper[4841]: I1204 09:31:20.732471 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-5hpbz"]
Dec 04 09:31:20 crc kubenswrapper[4841]: W1204 09:31:20.743908 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod503a4298_2b62_4e18_a4be_637dc8b9ffeb.slice/crio-23c3feb704bf6065c94deed0d71b91a653a3ae6d63db96d499f645349700c565 WatchSource:0}: Error finding container 23c3feb704bf6065c94deed0d71b91a653a3ae6d63db96d499f645349700c565: Status 404 returned error can't find the container with id 23c3feb704bf6065c94deed0d71b91a653a3ae6d63db96d499f645349700c565
Dec 04 09:31:21 crc kubenswrapper[4841]: I1204 09:31:21.695192 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" event={"ID":"17fa5a89-d2a0-4e18-a108-3d65165edf2c","Type":"ContainerStarted","Data":"035c80455af600af7ec16d5b61988031b1f3c3880e7ceba8fbb654b441d69a52"}
Dec 04 09:31:21 crc kubenswrapper[4841]: I1204 09:31:21.697279 4841 generic.go:334] "Generic (PLEG): container finished" podID="f85e6ab8-4e46-4b30-b425-1f812e4faabc" containerID="7037c1f8c0f41e24653b5d10f1440d4709aae1a876f136eb29600f44f487fc16" exitCode=0
Dec 04 09:31:21 crc kubenswrapper[4841]: I1204 09:31:21.697366 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" event={"ID":"f85e6ab8-4e46-4b30-b425-1f812e4faabc","Type":"ContainerDied","Data":"7037c1f8c0f41e24653b5d10f1440d4709aae1a876f136eb29600f44f487fc16"}
Dec 04 09:31:21 crc kubenswrapper[4841]: I1204 09:31:21.698660 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-5hpbz" event={"ID":"503a4298-2b62-4e18-a4be-637dc8b9ffeb","Type":"ContainerStarted","Data":"23c3feb704bf6065c94deed0d71b91a653a3ae6d63db96d499f645349700c565"}
Dec 04 09:31:22 crc kubenswrapper[4841]: I1204 09:31:22.740754 4841 generic.go:334] "Generic (PLEG): container finished" podID="f85e6ab8-4e46-4b30-b425-1f812e4faabc" containerID="b2dca9c6a759c06f42416f4d103e836586cad65db510d8b70139a4ec93e77c6c" exitCode=0
Dec 04 09:31:22 crc kubenswrapper[4841]: I1204 09:31:22.740953 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" event={"ID":"f85e6ab8-4e46-4b30-b425-1f812e4faabc","Type":"ContainerDied","Data":"b2dca9c6a759c06f42416f4d103e836586cad65db510d8b70139a4ec93e77c6c"}
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.113489 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6"
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.259837 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-util\") pod \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") "
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.259985 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-bundle\") pod \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") "
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.260029 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxlnt\" (UniqueName: \"kubernetes.io/projected/f85e6ab8-4e46-4b30-b425-1f812e4faabc-kube-api-access-dxlnt\") pod \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\" (UID: \"f85e6ab8-4e46-4b30-b425-1f812e4faabc\") "
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.261400 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-bundle" (OuterVolumeSpecName: "bundle") pod "f85e6ab8-4e46-4b30-b425-1f812e4faabc" (UID: "f85e6ab8-4e46-4b30-b425-1f812e4faabc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.265654 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85e6ab8-4e46-4b30-b425-1f812e4faabc-kube-api-access-dxlnt" (OuterVolumeSpecName: "kube-api-access-dxlnt") pod "f85e6ab8-4e46-4b30-b425-1f812e4faabc" (UID: "f85e6ab8-4e46-4b30-b425-1f812e4faabc"). InnerVolumeSpecName "kube-api-access-dxlnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.271505 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-util" (OuterVolumeSpecName: "util") pod "f85e6ab8-4e46-4b30-b425-1f812e4faabc" (UID: "f85e6ab8-4e46-4b30-b425-1f812e4faabc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.361317 4841 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.361348 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxlnt\" (UniqueName: \"kubernetes.io/projected/f85e6ab8-4e46-4b30-b425-1f812e4faabc-kube-api-access-dxlnt\") on node \"crc\" DevicePath \"\""
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.361358 4841 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f85e6ab8-4e46-4b30-b425-1f812e4faabc-util\") on node \"crc\" DevicePath \"\""
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.789448 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6" event={"ID":"f85e6ab8-4e46-4b30-b425-1f812e4faabc","Type":"ContainerDied","Data":"ca624a1fb8624cc63a0093e592519e6c2cd872e6f09b72f6a1e2f5304e4bae53"}
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.789775 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca624a1fb8624cc63a0093e592519e6c2cd872e6f09b72f6a1e2f5304e4bae53"
Dec 04 09:31:24 crc kubenswrapper[4841]: I1204 09:31:24.789503 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.119556 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-f76f68cc9-gbxkq"]
Dec 04 09:31:26 crc kubenswrapper[4841]: E1204 09:31:26.119807 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e6ab8-4e46-4b30-b425-1f812e4faabc" containerName="util"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.119823 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e6ab8-4e46-4b30-b425-1f812e4faabc" containerName="util"
Dec 04 09:31:26 crc kubenswrapper[4841]: E1204 09:31:26.119835 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e6ab8-4e46-4b30-b425-1f812e4faabc" containerName="pull"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.119842 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e6ab8-4e46-4b30-b425-1f812e4faabc" containerName="pull"
Dec 04 09:31:26 crc kubenswrapper[4841]: E1204 09:31:26.119853 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e6ab8-4e46-4b30-b425-1f812e4faabc" containerName="extract"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.119861 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e6ab8-4e46-4b30-b425-1f812e4faabc" containerName="extract"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.119965 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e6ab8-4e46-4b30-b425-1f812e4faabc" containerName="extract"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.120325 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.124084 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.124360 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.124535 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.124724 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-vcbsh"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.127687 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-f76f68cc9-gbxkq"]
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.287064 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70f8b73c-d182-4121-b00f-5b96fca648e2-webhook-cert\") pod \"elastic-operator-f76f68cc9-gbxkq\" (UID: \"70f8b73c-d182-4121-b00f-5b96fca648e2\") " pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.287133 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70f8b73c-d182-4121-b00f-5b96fca648e2-apiservice-cert\") pod \"elastic-operator-f76f68cc9-gbxkq\" (UID: \"70f8b73c-d182-4121-b00f-5b96fca648e2\") " pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.287168 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zxb\" (UniqueName: \"kubernetes.io/projected/70f8b73c-d182-4121-b00f-5b96fca648e2-kube-api-access-48zxb\") pod \"elastic-operator-f76f68cc9-gbxkq\" (UID: \"70f8b73c-d182-4121-b00f-5b96fca648e2\") " pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.388466 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70f8b73c-d182-4121-b00f-5b96fca648e2-webhook-cert\") pod \"elastic-operator-f76f68cc9-gbxkq\" (UID: \"70f8b73c-d182-4121-b00f-5b96fca648e2\") " pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.388531 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70f8b73c-d182-4121-b00f-5b96fca648e2-apiservice-cert\") pod \"elastic-operator-f76f68cc9-gbxkq\" (UID: \"70f8b73c-d182-4121-b00f-5b96fca648e2\") " pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.388561 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48zxb\" (UniqueName: \"kubernetes.io/projected/70f8b73c-d182-4121-b00f-5b96fca648e2-kube-api-access-48zxb\") pod \"elastic-operator-f76f68cc9-gbxkq\" (UID: \"70f8b73c-d182-4121-b00f-5b96fca648e2\") " pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.394812 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70f8b73c-d182-4121-b00f-5b96fca648e2-apiservice-cert\") pod \"elastic-operator-f76f68cc9-gbxkq\" (UID: \"70f8b73c-d182-4121-b00f-5b96fca648e2\") " pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.397316 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70f8b73c-d182-4121-b00f-5b96fca648e2-webhook-cert\") pod \"elastic-operator-f76f68cc9-gbxkq\" (UID: \"70f8b73c-d182-4121-b00f-5b96fca648e2\") " pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.436474 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48zxb\" (UniqueName: \"kubernetes.io/projected/70f8b73c-d182-4121-b00f-5b96fca648e2-kube-api-access-48zxb\") pod \"elastic-operator-f76f68cc9-gbxkq\" (UID: \"70f8b73c-d182-4121-b00f-5b96fca648e2\") " pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:26 crc kubenswrapper[4841]: I1204 09:31:26.445023 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq"
Dec 04 09:31:28 crc kubenswrapper[4841]: I1204 09:31:28.567407 4841 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.052609 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-f76f68cc9-gbxkq"]
Dec 04 09:31:40 crc kubenswrapper[4841]: W1204 09:31:40.058593 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70f8b73c_d182_4121_b00f_5b96fca648e2.slice/crio-bc0608ddbe00c977d4f80600678b6b34879beab5e4f3d60ef6f2736ba355ee57 WatchSource:0}: Error finding container bc0608ddbe00c977d4f80600678b6b34879beab5e4f3d60ef6f2736ba355ee57: Status 404 returned error can't find the container with id bc0608ddbe00c977d4f80600678b6b34879beab5e4f3d60ef6f2736ba355ee57
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.708388 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"]
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.709050 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.711506 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.711720 4841 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-9krnh"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.712195 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.763133 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"]
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.796470 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c509486f-0036-451f-b7b6-a0d87c5aa0b4-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-mld2h\" (UID: \"c509486f-0036-451f-b7b6-a0d87c5aa0b4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.796555 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbdmt\" (UniqueName: \"kubernetes.io/projected/c509486f-0036-451f-b7b6-a0d87c5aa0b4-kube-api-access-dbdmt\") pod \"cert-manager-operator-controller-manager-5446d6888b-mld2h\" (UID: \"c509486f-0036-451f-b7b6-a0d87c5aa0b4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.883213 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br" event={"ID":"5c329c80-8a48-46ed-951c-3eea7069ea2d","Type":"ContainerStarted","Data":"31a7a115ade68b9dbd507f478338905d5e9ea407448fbb9775c8c5859295cef0"}
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.886158 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" event={"ID":"17fa5a89-d2a0-4e18-a108-3d65165edf2c","Type":"ContainerStarted","Data":"8325807c99ba2770970e50ffe5009edbb06bfcb6970b347da32539381bd06d64"}
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.887720 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.888222 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.889394 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" event={"ID":"41b62915-03e0-4a31-ad0e-1ae076c46c5d","Type":"ContainerStarted","Data":"4b361b3aa9026ccdb0e22f268e2edb14cb0d7c6cda260f9bb675860963dfef40"}
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.890351 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq" event={"ID":"70f8b73c-d182-4121-b00f-5b96fca648e2","Type":"ContainerStarted","Data":"bc0608ddbe00c977d4f80600678b6b34879beab5e4f3d60ef6f2736ba355ee57"}
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.891728 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" event={"ID":"328f71ee-61d3-45fa-b31b-920b60b829da","Type":"ContainerStarted","Data":"3f5da3d86c9eb0bf676dc4a77a2fb9c914bc2fb71710e74ceb1796d852012561"}
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.894591 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-5hpbz" event={"ID":"503a4298-2b62-4e18-a4be-637dc8b9ffeb","Type":"ContainerStarted","Data":"92c4c608692dc651e6406bcc1d12c8bc2709831fec98cfe8e7d28bc63cbed681"}
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.894714 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-5hpbz"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.897715 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c509486f-0036-451f-b7b6-a0d87c5aa0b4-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-mld2h\" (UID: \"c509486f-0036-451f-b7b6-a0d87c5aa0b4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.897788 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbdmt\" (UniqueName: \"kubernetes.io/projected/c509486f-0036-451f-b7b6-a0d87c5aa0b4-kube-api-access-dbdmt\") pod \"cert-manager-operator-controller-manager-5446d6888b-mld2h\" (UID: \"c509486f-0036-451f-b7b6-a0d87c5aa0b4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.899186 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c509486f-0036-451f-b7b6-a0d87c5aa0b4-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-mld2h\" (UID: \"c509486f-0036-451f-b7b6-a0d87c5aa0b4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.914389 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9w7br" podStartSLOduration=2.779583153 podStartE2EDuration="21.914363066s" podCreationTimestamp="2025-12-04 09:31:19 +0000 UTC" firstStartedPulling="2025-12-04 09:31:20.580455717 +0000 UTC m=+747.332245921" lastFinishedPulling="2025-12-04 09:31:39.71523563 +0000 UTC m=+766.467025834" observedRunningTime="2025-12-04 09:31:40.913633768 +0000 UTC m=+767.665423982" watchObservedRunningTime="2025-12-04 09:31:40.914363066 +0000 UTC m=+767.666153270"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.931641 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbdmt\" (UniqueName: \"kubernetes.io/projected/c509486f-0036-451f-b7b6-a0d87c5aa0b4-kube-api-access-dbdmt\") pod \"cert-manager-operator-controller-manager-5446d6888b-mld2h\" (UID: \"c509486f-0036-451f-b7b6-a0d87c5aa0b4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.946259 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9" podStartSLOduration=2.857025177 podStartE2EDuration="21.946244288s" podCreationTimestamp="2025-12-04 09:31:19 +0000 UTC" firstStartedPulling="2025-12-04 09:31:20.673349825 +0000 UTC m=+747.425140029" lastFinishedPulling="2025-12-04 09:31:39.762568936 +0000 UTC m=+766.514359140" observedRunningTime="2025-12-04 09:31:40.944642658 +0000 UTC m=+767.696432862" watchObservedRunningTime="2025-12-04 09:31:40.946244288 +0000 UTC m=+767.698034492"
Dec 04 09:31:40 crc kubenswrapper[4841]: I1204 09:31:40.981545 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-6fwx2" podStartSLOduration=2.929475369 podStartE2EDuration="21.981527056s" podCreationTimestamp="2025-12-04 09:31:19 +0000 UTC" firstStartedPulling="2025-12-04 09:31:20.733926101 +0000 UTC m=+747.485716305" lastFinishedPulling="2025-12-04 09:31:39.785977778 +0000 UTC m=+766.537767992" observedRunningTime="2025-12-04 09:31:40.9744716 +0000 UTC m=+767.726261804" watchObservedRunningTime="2025-12-04 09:31:40.981527056 +0000 UTC m=+767.733317260"
Dec 04 09:31:41 crc kubenswrapper[4841]: I1204 09:31:41.024942 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"
Dec 04 09:31:41 crc kubenswrapper[4841]: I1204 09:31:41.036420 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w" podStartSLOduration=2.895715691 podStartE2EDuration="22.03640409s" podCreationTimestamp="2025-12-04 09:31:19 +0000 UTC" firstStartedPulling="2025-12-04 09:31:20.574666883 +0000 UTC m=+747.326457087" lastFinishedPulling="2025-12-04 09:31:39.715355282 +0000 UTC m=+766.467145486" observedRunningTime="2025-12-04 09:31:41.03604325 +0000 UTC m=+767.787833464" watchObservedRunningTime="2025-12-04 09:31:41.03640409 +0000 UTC m=+767.788194294"
Dec 04 09:31:41 crc kubenswrapper[4841]: I1204 09:31:41.037880 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-5hpbz" podStartSLOduration=3.069496468 podStartE2EDuration="22.037870826s" podCreationTimestamp="2025-12-04 09:31:19 +0000 UTC" firstStartedPulling="2025-12-04 09:31:20.746538654 +0000 UTC m=+747.498328858" lastFinishedPulling="2025-12-04 09:31:39.714913012 +0000 UTC m=+766.466703216" observedRunningTime="2025-12-04 09:31:41.009151562 +0000 UTC m=+767.760941766" watchObservedRunningTime="2025-12-04 09:31:41.037870826 +0000 UTC m=+767.789661030"
Dec 04 09:31:41 crc kubenswrapper[4841]: I1204 09:31:41.301913 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h"]
Dec 04 09:31:41 crc kubenswrapper[4841]: W1204 09:31:41.312454 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc509486f_0036_451f_b7b6_a0d87c5aa0b4.slice/crio-ab635a4e5b1f11b58b7a082c0ea8777a4476ae028267424043052c09ff16130b WatchSource:0}: Error finding container ab635a4e5b1f11b58b7a082c0ea8777a4476ae028267424043052c09ff16130b: Status 404 returned error can't find the container with id ab635a4e5b1f11b58b7a082c0ea8777a4476ae028267424043052c09ff16130b
Dec 04 09:31:41 crc kubenswrapper[4841]: I1204 09:31:41.901925 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h" event={"ID":"c509486f-0036-451f-b7b6-a0d87c5aa0b4","Type":"ContainerStarted","Data":"ab635a4e5b1f11b58b7a082c0ea8777a4476ae028267424043052c09ff16130b"}
Dec 04 09:31:45 crc kubenswrapper[4841]: I1204 09:31:45.922283 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq" event={"ID":"70f8b73c-d182-4121-b00f-5b96fca648e2","Type":"ContainerStarted","Data":"b4b70fdd507034d072ec3ad0010d47adc470db8acd3e3e79a2ab507115ea84c5"}
Dec 04 09:31:45 crc kubenswrapper[4841]: I1204 09:31:45.924103 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h" event={"ID":"c509486f-0036-451f-b7b6-a0d87c5aa0b4","Type":"ContainerStarted","Data":"38cecc07466a3b01bf0bf3972f4757112602228699bd4df38bafb63af2357093"}
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.009040 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-mld2h" podStartSLOduration=2.248808035 podStartE2EDuration="6.009025561s" podCreationTimestamp="2025-12-04 09:31:40 +0000 UTC" firstStartedPulling="2025-12-04 09:31:41.31605842 +0000 UTC m=+768.067848624" lastFinishedPulling="2025-12-04 09:31:45.076275946 +0000 UTC m=+771.828066150" observedRunningTime="2025-12-04 09:31:46.002075518 +0000 UTC m=+772.753865722" watchObservedRunningTime="2025-12-04 09:31:46.009025561 +0000 UTC m=+772.760815755"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.010353 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-f76f68cc9-gbxkq" podStartSLOduration=14.994314852 podStartE2EDuration="20.010350054s" podCreationTimestamp="2025-12-04 09:31:26 +0000 UTC" firstStartedPulling="2025-12-04 09:31:40.065227689 +0000 UTC m=+766.817017893" lastFinishedPulling="2025-12-04 09:31:45.081262881 +0000 UTC m=+771.833053095" observedRunningTime="2025-12-04 09:31:45.948130108 +0000 UTC m=+772.699920312" watchObservedRunningTime="2025-12-04 09:31:46.010350054 +0000 UTC m=+772.762140258"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.396823 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.398357 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.400240 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.400544 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-jktt7"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.401134 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.402334 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.402656 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.403301 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.403407 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.403992 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.404537 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.418137 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.487885 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.487941 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.487966 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.487988 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488013 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488049 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488180 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488238 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/447eaad4-e25d-4d71-a0e2-f720640f3ba2-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488277 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488303 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488375 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488410 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488436 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488465 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.488508 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589232 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589287 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/447eaad4-e25d-4d71-a0e2-f720640f3ba2-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589324 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589348 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589377 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589440 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589465 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589493 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589519 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589543 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589571 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589592 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589616 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: 
\"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589640 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589672 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.589992 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.590028 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc 
kubenswrapper[4841]: I1204 09:31:46.590205 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.590289 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.590309 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.590551 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.591067 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.591133 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.596967 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.597068 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.597527 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.598915 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-http-certificates\") 
pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.599697 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.601435 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/447eaad4-e25d-4d71-a0e2-f720640f3ba2-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.606580 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/447eaad4-e25d-4d71-a0e2-f720640f3ba2-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"447eaad4-e25d-4d71-a0e2-f720640f3ba2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.719409 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:31:46 crc kubenswrapper[4841]: I1204 09:31:46.955272 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 04 09:31:46 crc kubenswrapper[4841]: W1204 09:31:46.966979 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod447eaad4_e25d_4d71_a0e2_f720640f3ba2.slice/crio-a1bbc377bea29035390ebf731f0a0db0de6ddf4fc22e250224491d9661ff801b WatchSource:0}: Error finding container a1bbc377bea29035390ebf731f0a0db0de6ddf4fc22e250224491d9661ff801b: Status 404 returned error can't find the container with id a1bbc377bea29035390ebf731f0a0db0de6ddf4fc22e250224491d9661ff801b Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.410253 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bk9zp"] Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.411364 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.424870 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bk9zp"] Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.605446 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-catalog-content\") pod \"certified-operators-bk9zp\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.605518 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd7t5\" (UniqueName: \"kubernetes.io/projected/cd43601a-8eeb-4777-b1da-56e00fdd961e-kube-api-access-bd7t5\") pod \"certified-operators-bk9zp\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.605549 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-utilities\") pod \"certified-operators-bk9zp\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.706665 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd7t5\" (UniqueName: \"kubernetes.io/projected/cd43601a-8eeb-4777-b1da-56e00fdd961e-kube-api-access-bd7t5\") pod \"certified-operators-bk9zp\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.706743 4841 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-utilities\") pod \"certified-operators-bk9zp\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.706928 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-catalog-content\") pod \"certified-operators-bk9zp\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.707352 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-catalog-content\") pod \"certified-operators-bk9zp\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.707620 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-utilities\") pod \"certified-operators-bk9zp\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.733524 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd7t5\" (UniqueName: \"kubernetes.io/projected/cd43601a-8eeb-4777-b1da-56e00fdd961e-kube-api-access-bd7t5\") pod \"certified-operators-bk9zp\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.738104 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:47 crc kubenswrapper[4841]: I1204 09:31:47.937578 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"447eaad4-e25d-4d71-a0e2-f720640f3ba2","Type":"ContainerStarted","Data":"a1bbc377bea29035390ebf731f0a0db0de6ddf4fc22e250224491d9661ff801b"} Dec 04 09:31:48 crc kubenswrapper[4841]: I1204 09:31:48.162654 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bk9zp"] Dec 04 09:31:48 crc kubenswrapper[4841]: W1204 09:31:48.198530 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd43601a_8eeb_4777_b1da_56e00fdd961e.slice/crio-a6261830891a0b08331e385bde350bab48d2097a62d690e267387112757a7152 WatchSource:0}: Error finding container a6261830891a0b08331e385bde350bab48d2097a62d690e267387112757a7152: Status 404 returned error can't find the container with id a6261830891a0b08331e385bde350bab48d2097a62d690e267387112757a7152 Dec 04 09:31:48 crc kubenswrapper[4841]: I1204 09:31:48.948971 4841 generic.go:334] "Generic (PLEG): container finished" podID="cd43601a-8eeb-4777-b1da-56e00fdd961e" containerID="b59b9a1321c2755c4658c797ac5e9cfccdb1e12afdc0477d0b139a90bd62ae07" exitCode=0 Dec 04 09:31:48 crc kubenswrapper[4841]: I1204 09:31:48.949139 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk9zp" event={"ID":"cd43601a-8eeb-4777-b1da-56e00fdd961e","Type":"ContainerDied","Data":"b59b9a1321c2755c4658c797ac5e9cfccdb1e12afdc0477d0b139a90bd62ae07"} Dec 04 09:31:48 crc kubenswrapper[4841]: I1204 09:31:48.949237 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk9zp" 
event={"ID":"cd43601a-8eeb-4777-b1da-56e00fdd961e","Type":"ContainerStarted","Data":"a6261830891a0b08331e385bde350bab48d2097a62d690e267387112757a7152"} Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.228568 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-lbdpk"] Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.229229 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.231202 4841 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ffgsk" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.231294 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.231339 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.244930 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-lbdpk"] Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.335093 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-lbdpk\" (UID: \"0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.335171 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjtm\" (UniqueName: \"kubernetes.io/projected/0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae-kube-api-access-zwjtm\") pod \"cert-manager-webhook-f4fb5df64-lbdpk\" (UID: 
\"0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.436924 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-lbdpk\" (UID: \"0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.437036 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwjtm\" (UniqueName: \"kubernetes.io/projected/0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae-kube-api-access-zwjtm\") pod \"cert-manager-webhook-f4fb5df64-lbdpk\" (UID: \"0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.463647 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwjtm\" (UniqueName: \"kubernetes.io/projected/0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae-kube-api-access-zwjtm\") pod \"cert-manager-webhook-f4fb5df64-lbdpk\" (UID: \"0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.482370 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-lbdpk\" (UID: \"0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.547146 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" Dec 04 09:31:49 crc kubenswrapper[4841]: I1204 09:31:49.978927 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-5hpbz" Dec 04 09:31:50 crc kubenswrapper[4841]: I1204 09:31:50.497633 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:31:50 crc kubenswrapper[4841]: I1204 09:31:50.497701 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:31:51 crc kubenswrapper[4841]: I1204 09:31:51.723299 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-nz48b"] Dec 04 09:31:51 crc kubenswrapper[4841]: I1204 09:31:51.724059 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" Dec 04 09:31:51 crc kubenswrapper[4841]: I1204 09:31:51.726488 4841 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9mr7q" Dec 04 09:31:51 crc kubenswrapper[4841]: I1204 09:31:51.728457 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-nz48b"] Dec 04 09:31:51 crc kubenswrapper[4841]: I1204 09:31:51.873125 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9ab64cd-10d3-4143-90da-d4fe6e5525ab-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-nz48b\" (UID: \"f9ab64cd-10d3-4143-90da-d4fe6e5525ab\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" Dec 04 09:31:51 crc kubenswrapper[4841]: I1204 09:31:51.873202 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc9v8\" (UniqueName: \"kubernetes.io/projected/f9ab64cd-10d3-4143-90da-d4fe6e5525ab-kube-api-access-fc9v8\") pod \"cert-manager-cainjector-855d9ccff4-nz48b\" (UID: \"f9ab64cd-10d3-4143-90da-d4fe6e5525ab\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" Dec 04 09:31:51 crc kubenswrapper[4841]: I1204 09:31:51.974298 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9ab64cd-10d3-4143-90da-d4fe6e5525ab-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-nz48b\" (UID: \"f9ab64cd-10d3-4143-90da-d4fe6e5525ab\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" Dec 04 09:31:51 crc kubenswrapper[4841]: I1204 09:31:51.974588 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc9v8\" (UniqueName: 
\"kubernetes.io/projected/f9ab64cd-10d3-4143-90da-d4fe6e5525ab-kube-api-access-fc9v8\") pod \"cert-manager-cainjector-855d9ccff4-nz48b\" (UID: \"f9ab64cd-10d3-4143-90da-d4fe6e5525ab\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" Dec 04 09:31:52 crc kubenswrapper[4841]: I1204 09:31:52.005560 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9ab64cd-10d3-4143-90da-d4fe6e5525ab-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-nz48b\" (UID: \"f9ab64cd-10d3-4143-90da-d4fe6e5525ab\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" Dec 04 09:31:52 crc kubenswrapper[4841]: I1204 09:31:52.021249 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc9v8\" (UniqueName: \"kubernetes.io/projected/f9ab64cd-10d3-4143-90da-d4fe6e5525ab-kube-api-access-fc9v8\") pod \"cert-manager-cainjector-855d9ccff4-nz48b\" (UID: \"f9ab64cd-10d3-4143-90da-d4fe6e5525ab\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" Dec 04 09:31:52 crc kubenswrapper[4841]: I1204 09:31:52.049737 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" Dec 04 09:31:52 crc kubenswrapper[4841]: I1204 09:31:52.424566 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-lbdpk"] Dec 04 09:31:52 crc kubenswrapper[4841]: I1204 09:31:52.589242 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-nz48b"] Dec 04 09:31:53 crc kubenswrapper[4841]: I1204 09:31:53.031898 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk9zp" event={"ID":"cd43601a-8eeb-4777-b1da-56e00fdd961e","Type":"ContainerStarted","Data":"2020a2ec3667fea719a6a25f4e382802b69649310609401c2a55a68e84c0f550"} Dec 04 09:31:53 crc kubenswrapper[4841]: I1204 09:31:53.036622 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" event={"ID":"f9ab64cd-10d3-4143-90da-d4fe6e5525ab","Type":"ContainerStarted","Data":"63bcd257a938ad21fe573093454c98bb01899c9825fb4cc9e401cd133fdf1e75"} Dec 04 09:31:53 crc kubenswrapper[4841]: I1204 09:31:53.050507 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" event={"ID":"0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae","Type":"ContainerStarted","Data":"8c57cbc8bd26c3f8345eba0abea42503e663140bd555023186b9857eba8edc2e"} Dec 04 09:31:54 crc kubenswrapper[4841]: I1204 09:31:54.061822 4841 generic.go:334] "Generic (PLEG): container finished" podID="cd43601a-8eeb-4777-b1da-56e00fdd961e" containerID="2020a2ec3667fea719a6a25f4e382802b69649310609401c2a55a68e84c0f550" exitCode=0 Dec 04 09:31:54 crc kubenswrapper[4841]: I1204 09:31:54.061972 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk9zp" event={"ID":"cd43601a-8eeb-4777-b1da-56e00fdd961e","Type":"ContainerDied","Data":"2020a2ec3667fea719a6a25f4e382802b69649310609401c2a55a68e84c0f550"} Dec 
04 09:31:56 crc kubenswrapper[4841]: I1204 09:31:56.081331 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk9zp" event={"ID":"cd43601a-8eeb-4777-b1da-56e00fdd961e","Type":"ContainerStarted","Data":"82d0907d9e7a4092a3291a645c5f212871c8ba2be2559a8f82897962c95c9a82"} Dec 04 09:31:56 crc kubenswrapper[4841]: I1204 09:31:56.107953 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bk9zp" podStartSLOduration=3.017509397 podStartE2EDuration="9.107936935s" podCreationTimestamp="2025-12-04 09:31:47 +0000 UTC" firstStartedPulling="2025-12-04 09:31:48.950812934 +0000 UTC m=+775.702603128" lastFinishedPulling="2025-12-04 09:31:55.041240462 +0000 UTC m=+781.793030666" observedRunningTime="2025-12-04 09:31:56.105023634 +0000 UTC m=+782.856813858" watchObservedRunningTime="2025-12-04 09:31:56.107936935 +0000 UTC m=+782.859727139" Dec 04 09:31:57 crc kubenswrapper[4841]: I1204 09:31:57.738810 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:57 crc kubenswrapper[4841]: I1204 09:31:57.739031 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:31:57 crc kubenswrapper[4841]: I1204 09:31:57.827660 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.423881 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-5hrcv"] Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.425850 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-5hrcv" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.429024 4841 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-plb2f" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.439244 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-5hrcv"] Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.600552 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3fa4e21-9119-47f0-a900-d0f08fd0fecf-bound-sa-token\") pod \"cert-manager-86cb77c54b-5hrcv\" (UID: \"c3fa4e21-9119-47f0-a900-d0f08fd0fecf\") " pod="cert-manager/cert-manager-86cb77c54b-5hrcv" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.601042 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9hz8\" (UniqueName: \"kubernetes.io/projected/c3fa4e21-9119-47f0-a900-d0f08fd0fecf-kube-api-access-t9hz8\") pod \"cert-manager-86cb77c54b-5hrcv\" (UID: \"c3fa4e21-9119-47f0-a900-d0f08fd0fecf\") " pod="cert-manager/cert-manager-86cb77c54b-5hrcv" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.702681 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3fa4e21-9119-47f0-a900-d0f08fd0fecf-bound-sa-token\") pod \"cert-manager-86cb77c54b-5hrcv\" (UID: \"c3fa4e21-9119-47f0-a900-d0f08fd0fecf\") " pod="cert-manager/cert-manager-86cb77c54b-5hrcv" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.703099 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9hz8\" (UniqueName: \"kubernetes.io/projected/c3fa4e21-9119-47f0-a900-d0f08fd0fecf-kube-api-access-t9hz8\") pod \"cert-manager-86cb77c54b-5hrcv\" (UID: 
\"c3fa4e21-9119-47f0-a900-d0f08fd0fecf\") " pod="cert-manager/cert-manager-86cb77c54b-5hrcv" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.729371 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3fa4e21-9119-47f0-a900-d0f08fd0fecf-bound-sa-token\") pod \"cert-manager-86cb77c54b-5hrcv\" (UID: \"c3fa4e21-9119-47f0-a900-d0f08fd0fecf\") " pod="cert-manager/cert-manager-86cb77c54b-5hrcv" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.729738 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9hz8\" (UniqueName: \"kubernetes.io/projected/c3fa4e21-9119-47f0-a900-d0f08fd0fecf-kube-api-access-t9hz8\") pod \"cert-manager-86cb77c54b-5hrcv\" (UID: \"c3fa4e21-9119-47f0-a900-d0f08fd0fecf\") " pod="cert-manager/cert-manager-86cb77c54b-5hrcv" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.754347 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-5hrcv" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.784437 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:32:07 crc kubenswrapper[4841]: I1204 09:32:07.826442 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bk9zp"] Dec 04 09:32:08 crc kubenswrapper[4841]: I1204 09:32:08.167582 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bk9zp" podUID="cd43601a-8eeb-4777-b1da-56e00fdd961e" containerName="registry-server" containerID="cri-o://82d0907d9e7a4092a3291a645c5f212871c8ba2be2559a8f82897962c95c9a82" gracePeriod=2 Dec 04 09:32:09 crc kubenswrapper[4841]: I1204 09:32:09.185875 4841 generic.go:334] "Generic (PLEG): container finished" podID="cd43601a-8eeb-4777-b1da-56e00fdd961e" 
containerID="82d0907d9e7a4092a3291a645c5f212871c8ba2be2559a8f82897962c95c9a82" exitCode=0 Dec 04 09:32:09 crc kubenswrapper[4841]: I1204 09:32:09.186305 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk9zp" event={"ID":"cd43601a-8eeb-4777-b1da-56e00fdd961e","Type":"ContainerDied","Data":"82d0907d9e7a4092a3291a645c5f212871c8ba2be2559a8f82897962c95c9a82"} Dec 04 09:32:11 crc kubenswrapper[4841]: I1204 09:32:11.726843 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:32:11 crc kubenswrapper[4841]: I1204 09:32:11.860120 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-utilities\") pod \"cd43601a-8eeb-4777-b1da-56e00fdd961e\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " Dec 04 09:32:11 crc kubenswrapper[4841]: I1204 09:32:11.860233 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd7t5\" (UniqueName: \"kubernetes.io/projected/cd43601a-8eeb-4777-b1da-56e00fdd961e-kube-api-access-bd7t5\") pod \"cd43601a-8eeb-4777-b1da-56e00fdd961e\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " Dec 04 09:32:11 crc kubenswrapper[4841]: I1204 09:32:11.860263 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-catalog-content\") pod \"cd43601a-8eeb-4777-b1da-56e00fdd961e\" (UID: \"cd43601a-8eeb-4777-b1da-56e00fdd961e\") " Dec 04 09:32:11 crc kubenswrapper[4841]: I1204 09:32:11.861637 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-utilities" (OuterVolumeSpecName: "utilities") pod "cd43601a-8eeb-4777-b1da-56e00fdd961e" (UID: 
"cd43601a-8eeb-4777-b1da-56e00fdd961e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:32:11 crc kubenswrapper[4841]: I1204 09:32:11.868554 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd43601a-8eeb-4777-b1da-56e00fdd961e-kube-api-access-bd7t5" (OuterVolumeSpecName: "kube-api-access-bd7t5") pod "cd43601a-8eeb-4777-b1da-56e00fdd961e" (UID: "cd43601a-8eeb-4777-b1da-56e00fdd961e"). InnerVolumeSpecName "kube-api-access-bd7t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:32:11 crc kubenswrapper[4841]: I1204 09:32:11.916352 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd43601a-8eeb-4777-b1da-56e00fdd961e" (UID: "cd43601a-8eeb-4777-b1da-56e00fdd961e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:32:11 crc kubenswrapper[4841]: I1204 09:32:11.961497 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:32:11 crc kubenswrapper[4841]: I1204 09:32:11.961529 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd7t5\" (UniqueName: \"kubernetes.io/projected/cd43601a-8eeb-4777-b1da-56e00fdd961e-kube-api-access-bd7t5\") on node \"crc\" DevicePath \"\"" Dec 04 09:32:11 crc kubenswrapper[4841]: I1204 09:32:11.961541 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd43601a-8eeb-4777-b1da-56e00fdd961e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:32:12 crc kubenswrapper[4841]: E1204 09:32:12.007792 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Dec 04 09:32:12 crc kubenswrapper[4841]: E1204 09:32:12.008303 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:
true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(447eaad4-e25d-4d71-a0e2-f720640f3ba2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 09:32:12 crc kubenswrapper[4841]: E1204 09:32:12.009538 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="447eaad4-e25d-4d71-a0e2-f720640f3ba2" Dec 04 09:32:12 crc 
kubenswrapper[4841]: I1204 09:32:12.101661 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-5hrcv"] Dec 04 09:32:12 crc kubenswrapper[4841]: W1204 09:32:12.108882 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3fa4e21_9119_47f0_a900_d0f08fd0fecf.slice/crio-6189cb0c08fa2c11af952adb65ff8b6b18c991d6dea3fc3616498839d0d10569 WatchSource:0}: Error finding container 6189cb0c08fa2c11af952adb65ff8b6b18c991d6dea3fc3616498839d0d10569: Status 404 returned error can't find the container with id 6189cb0c08fa2c11af952adb65ff8b6b18c991d6dea3fc3616498839d0d10569 Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.208426 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-5hrcv" event={"ID":"c3fa4e21-9119-47f0-a900-d0f08fd0fecf","Type":"ContainerStarted","Data":"6189cb0c08fa2c11af952adb65ff8b6b18c991d6dea3fc3616498839d0d10569"} Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.209709 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" event={"ID":"f9ab64cd-10d3-4143-90da-d4fe6e5525ab","Type":"ContainerStarted","Data":"b386ce22f1c7df1913f5cc2b28ae0652e6eb4c1c4ef0e274442d3803cc30cee0"} Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.211677 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" event={"ID":"0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae","Type":"ContainerStarted","Data":"f4e496f1de26585819da9e54fc605f10906df4608622f2f2850d07046268bafc"} Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.211790 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.220510 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bk9zp" Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.220949 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk9zp" event={"ID":"cd43601a-8eeb-4777-b1da-56e00fdd961e","Type":"ContainerDied","Data":"a6261830891a0b08331e385bde350bab48d2097a62d690e267387112757a7152"} Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.220986 4841 scope.go:117] "RemoveContainer" containerID="82d0907d9e7a4092a3291a645c5f212871c8ba2be2559a8f82897962c95c9a82" Dec 04 09:32:12 crc kubenswrapper[4841]: E1204 09:32:12.221809 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="447eaad4-e25d-4d71-a0e2-f720640f3ba2" Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.234515 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-nz48b" podStartSLOduration=1.9093116110000001 podStartE2EDuration="21.23449165s" podCreationTimestamp="2025-12-04 09:31:51 +0000 UTC" firstStartedPulling="2025-12-04 09:31:52.596982443 +0000 UTC m=+779.348772647" lastFinishedPulling="2025-12-04 09:32:11.922162482 +0000 UTC m=+798.673952686" observedRunningTime="2025-12-04 09:32:12.230999914 +0000 UTC m=+798.982790128" watchObservedRunningTime="2025-12-04 09:32:12.23449165 +0000 UTC m=+798.986281854" Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.245150 4841 scope.go:117] "RemoveContainer" containerID="2020a2ec3667fea719a6a25f4e382802b69649310609401c2a55a68e84c0f550" Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.275101 4841 scope.go:117] "RemoveContainer" containerID="b59b9a1321c2755c4658c797ac5e9cfccdb1e12afdc0477d0b139a90bd62ae07" Dec 04 
09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.305291 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" podStartSLOduration=3.768654134 podStartE2EDuration="23.305272698s" podCreationTimestamp="2025-12-04 09:31:49 +0000 UTC" firstStartedPulling="2025-12-04 09:31:52.451626942 +0000 UTC m=+779.203417146" lastFinishedPulling="2025-12-04 09:32:11.988245506 +0000 UTC m=+798.740035710" observedRunningTime="2025-12-04 09:32:12.301977038 +0000 UTC m=+799.053767272" watchObservedRunningTime="2025-12-04 09:32:12.305272698 +0000 UTC m=+799.057062922" Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.320195 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bk9zp"] Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.326154 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bk9zp"] Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.365692 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 04 09:32:12 crc kubenswrapper[4841]: I1204 09:32:12.395181 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 04 09:32:13 crc kubenswrapper[4841]: I1204 09:32:13.230388 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-5hrcv" event={"ID":"c3fa4e21-9119-47f0-a900-d0f08fd0fecf","Type":"ContainerStarted","Data":"55ad95b523eb09502b91705bb3c1e9bcb27757218261de4a24aafce73c8fe3fe"} Dec 04 09:32:13 crc kubenswrapper[4841]: E1204 09:32:13.232567 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" 
podUID="447eaad4-e25d-4d71-a0e2-f720640f3ba2" Dec 04 09:32:13 crc kubenswrapper[4841]: I1204 09:32:13.281810 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-5hrcv" podStartSLOduration=6.281795099 podStartE2EDuration="6.281795099s" podCreationTimestamp="2025-12-04 09:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:32:13.278484098 +0000 UTC m=+800.030274302" watchObservedRunningTime="2025-12-04 09:32:13.281795099 +0000 UTC m=+800.033585303" Dec 04 09:32:13 crc kubenswrapper[4841]: I1204 09:32:13.626637 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd43601a-8eeb-4777-b1da-56e00fdd961e" path="/var/lib/kubelet/pods/cd43601a-8eeb-4777-b1da-56e00fdd961e/volumes" Dec 04 09:32:14 crc kubenswrapper[4841]: E1204 09:32:14.237367 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="447eaad4-e25d-4d71-a0e2-f720640f3ba2" Dec 04 09:32:19 crc kubenswrapper[4841]: I1204 09:32:19.550992 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-lbdpk" Dec 04 09:32:20 crc kubenswrapper[4841]: I1204 09:32:20.497487 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:32:20 crc kubenswrapper[4841]: I1204 09:32:20.497740 4841 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:32:20 crc kubenswrapper[4841]: I1204 09:32:20.497884 4841 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:32:20 crc kubenswrapper[4841]: I1204 09:32:20.498405 4841 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dbab89145cc3e3f957a444edc7e520ea73581e21fddeb3e6fa00bb9bfcf2af76"} pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 09:32:20 crc kubenswrapper[4841]: I1204 09:32:20.498531 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" containerID="cri-o://dbab89145cc3e3f957a444edc7e520ea73581e21fddeb3e6fa00bb9bfcf2af76" gracePeriod=600 Dec 04 09:32:21 crc kubenswrapper[4841]: I1204 09:32:21.284259 4841 generic.go:334] "Generic (PLEG): container finished" podID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerID="dbab89145cc3e3f957a444edc7e520ea73581e21fddeb3e6fa00bb9bfcf2af76" exitCode=0 Dec 04 09:32:21 crc kubenswrapper[4841]: I1204 09:32:21.284301 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerDied","Data":"dbab89145cc3e3f957a444edc7e520ea73581e21fddeb3e6fa00bb9bfcf2af76"} Dec 04 09:32:21 crc kubenswrapper[4841]: I1204 09:32:21.285104 4841 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerStarted","Data":"ac966342fc1483cff7083af17cc1e40ce4b6cc956c6529691732d65680c9dfa4"} Dec 04 09:32:21 crc kubenswrapper[4841]: I1204 09:32:21.285129 4841 scope.go:117] "RemoveContainer" containerID="c46e17cfed7aa9799cd3b8c0ac668e71194f263a863afcf61170a430ac52530e" Dec 04 09:32:28 crc kubenswrapper[4841]: I1204 09:32:28.375406 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"447eaad4-e25d-4d71-a0e2-f720640f3ba2","Type":"ContainerStarted","Data":"c4a350855b1ee6b72f9c124d7ac4708e1e321e1f67b7698e3e424a8515c8b454"} Dec 04 09:32:30 crc kubenswrapper[4841]: I1204 09:32:30.392565 4841 generic.go:334] "Generic (PLEG): container finished" podID="447eaad4-e25d-4d71-a0e2-f720640f3ba2" containerID="c4a350855b1ee6b72f9c124d7ac4708e1e321e1f67b7698e3e424a8515c8b454" exitCode=0 Dec 04 09:32:30 crc kubenswrapper[4841]: I1204 09:32:30.392655 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"447eaad4-e25d-4d71-a0e2-f720640f3ba2","Type":"ContainerDied","Data":"c4a350855b1ee6b72f9c124d7ac4708e1e321e1f67b7698e3e424a8515c8b454"} Dec 04 09:32:31 crc kubenswrapper[4841]: I1204 09:32:31.404285 4841 generic.go:334] "Generic (PLEG): container finished" podID="447eaad4-e25d-4d71-a0e2-f720640f3ba2" containerID="4affe8f6de9642068fe0ec50ae9711c72be4fb249c40eb8e0680512f59cdfd67" exitCode=0 Dec 04 09:32:31 crc kubenswrapper[4841]: I1204 09:32:31.404340 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"447eaad4-e25d-4d71-a0e2-f720640f3ba2","Type":"ContainerDied","Data":"4affe8f6de9642068fe0ec50ae9711c72be4fb249c40eb8e0680512f59cdfd67"} Dec 04 09:32:32 crc kubenswrapper[4841]: I1204 09:32:32.415399 4841 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"447eaad4-e25d-4d71-a0e2-f720640f3ba2","Type":"ContainerStarted","Data":"fc3eff6054ca8900946dcb1ee5834bbdde9123def208a64150919f9c243c5e40"} Dec 04 09:32:32 crc kubenswrapper[4841]: I1204 09:32:32.415941 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:32:32 crc kubenswrapper[4841]: I1204 09:32:32.470696 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.194322726 podStartE2EDuration="46.470680947s" podCreationTimestamp="2025-12-04 09:31:46 +0000 UTC" firstStartedPulling="2025-12-04 09:31:46.968385977 +0000 UTC m=+773.720176201" lastFinishedPulling="2025-12-04 09:32:27.244744218 +0000 UTC m=+813.996534422" observedRunningTime="2025-12-04 09:32:32.464681261 +0000 UTC m=+819.216471465" watchObservedRunningTime="2025-12-04 09:32:32.470680947 +0000 UTC m=+819.222471151" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.439561 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 04 09:32:34 crc kubenswrapper[4841]: E1204 09:32:34.440455 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd43601a-8eeb-4777-b1da-56e00fdd961e" containerName="extract-utilities" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.440485 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd43601a-8eeb-4777-b1da-56e00fdd961e" containerName="extract-utilities" Dec 04 09:32:34 crc kubenswrapper[4841]: E1204 09:32:34.440520 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd43601a-8eeb-4777-b1da-56e00fdd961e" containerName="registry-server" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.440541 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd43601a-8eeb-4777-b1da-56e00fdd961e" 
containerName="registry-server" Dec 04 09:32:34 crc kubenswrapper[4841]: E1204 09:32:34.440589 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd43601a-8eeb-4777-b1da-56e00fdd961e" containerName="extract-content" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.440606 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd43601a-8eeb-4777-b1da-56e00fdd961e" containerName="extract-content" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.440952 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd43601a-8eeb-4777-b1da-56e00fdd961e" containerName="registry-server" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.443308 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.447139 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.447204 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.447219 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.447163 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.459988 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.464199 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-xjgtw" Dec 04 
09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483312 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483353 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483371 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483392 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483415 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483439 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xjgtw-push\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483458 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xjgtw-pull\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483478 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmwz\" (UniqueName: \"kubernetes.io/projected/6f6b00a4-ef75-4479-b809-47eb43546686-kube-api-access-vcmwz\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483496 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483512 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483534 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483555 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.483571 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584454 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584513 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xjgtw-push\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584540 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xjgtw-pull\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584563 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmwz\" (UniqueName: \"kubernetes.io/projected/6f6b00a4-ef75-4479-b809-47eb43546686-kube-api-access-vcmwz\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584583 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" 
(UniqueName: \"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584607 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584637 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584667 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584683 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 
09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584688 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584716 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584736 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584754 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.584785 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.585150 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.585250 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.585570 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.585597 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.585707 
4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.585868 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.585996 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.586320 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.603079 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xjgtw-pull\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.603192 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xjgtw-push\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.603332 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.607280 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmwz\" (UniqueName: \"kubernetes.io/projected/6f6b00a4-ef75-4479-b809-47eb43546686-kube-api-access-vcmwz\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:34 crc kubenswrapper[4841]: I1204 09:32:34.768223 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:32:35 crc kubenswrapper[4841]: I1204 09:32:35.091161 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 04 09:32:35 crc kubenswrapper[4841]: I1204 09:32:35.439341 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6f6b00a4-ef75-4479-b809-47eb43546686","Type":"ContainerStarted","Data":"d30941f3919f7f505ba2f2a257458ed792fff3c86c5f770465278a6509e6865c"} Dec 04 09:32:41 crc kubenswrapper[4841]: I1204 09:32:41.824080 4841 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="447eaad4-e25d-4d71-a0e2-f720640f3ba2" containerName="elasticsearch" probeResult="failure" output=< Dec 04 09:32:41 crc kubenswrapper[4841]: {"timestamp": "2025-12-04T09:32:41+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 04 09:32:41 crc kubenswrapper[4841]: > Dec 04 09:32:42 crc kubenswrapper[4841]: I1204 09:32:42.490458 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6f6b00a4-ef75-4479-b809-47eb43546686","Type":"ContainerStarted","Data":"c42736e17b192288c0d5319dd9ddcfead13736c1d43eee20ca9dbf18171df14f"} Dec 04 09:32:43 crc kubenswrapper[4841]: I1204 09:32:43.497712 4841 generic.go:334] "Generic (PLEG): container finished" podID="6f6b00a4-ef75-4479-b809-47eb43546686" containerID="c42736e17b192288c0d5319dd9ddcfead13736c1d43eee20ca9dbf18171df14f" exitCode=0 Dec 04 09:32:43 crc kubenswrapper[4841]: I1204 09:32:43.497790 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6f6b00a4-ef75-4479-b809-47eb43546686","Type":"ContainerDied","Data":"c42736e17b192288c0d5319dd9ddcfead13736c1d43eee20ca9dbf18171df14f"} Dec 
04 09:32:44 crc kubenswrapper[4841]: I1204 09:32:44.506838 4841 generic.go:334] "Generic (PLEG): container finished" podID="6f6b00a4-ef75-4479-b809-47eb43546686" containerID="58a4cc7e12ae8cfe180676ff06b4ea2a2c110567c9dc2c57e1d3c33551c6459b" exitCode=0 Dec 04 09:32:44 crc kubenswrapper[4841]: I1204 09:32:44.506916 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6f6b00a4-ef75-4479-b809-47eb43546686","Type":"ContainerDied","Data":"58a4cc7e12ae8cfe180676ff06b4ea2a2c110567c9dc2c57e1d3c33551c6459b"} Dec 04 09:32:44 crc kubenswrapper[4841]: I1204 09:32:44.582651 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_6f6b00a4-ef75-4479-b809-47eb43546686/manage-dockerfile/0.log" Dec 04 09:32:45 crc kubenswrapper[4841]: I1204 09:32:45.518157 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6f6b00a4-ef75-4479-b809-47eb43546686","Type":"ContainerStarted","Data":"cf6f0e1905a69b9cd9e14dc5f5bfcd5ca9adacd13db0bb0930dcb80d7b66c399"} Dec 04 09:32:45 crc kubenswrapper[4841]: I1204 09:32:45.576876 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=4.551368151 podStartE2EDuration="11.576847922s" podCreationTimestamp="2025-12-04 09:32:34 +0000 UTC" firstStartedPulling="2025-12-04 09:32:35.105039978 +0000 UTC m=+821.856830222" lastFinishedPulling="2025-12-04 09:32:42.130519789 +0000 UTC m=+828.882309993" observedRunningTime="2025-12-04 09:32:45.57224839 +0000 UTC m=+832.324038634" watchObservedRunningTime="2025-12-04 09:32:45.576847922 +0000 UTC m=+832.328638196" Dec 04 09:32:46 crc kubenswrapper[4841]: I1204 09:32:46.989477 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="service-telemetry/elasticsearch-es-default-0" Dec 04 09:33:22 crc kubenswrapper[4841]: I1204 09:33:22.936855 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6t27v"] Dec 04 09:33:22 crc kubenswrapper[4841]: I1204 09:33:22.938882 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:22 crc kubenswrapper[4841]: I1204 09:33:22.958545 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6t27v"] Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.022053 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjzws\" (UniqueName: \"kubernetes.io/projected/3151b0ab-c890-45b5-939d-1ef740ac05e0-kube-api-access-kjzws\") pod \"redhat-operators-6t27v\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.022151 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-utilities\") pod \"redhat-operators-6t27v\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.022181 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-catalog-content\") pod \"redhat-operators-6t27v\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.122946 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-utilities\") pod \"redhat-operators-6t27v\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.123010 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-catalog-content\") pod \"redhat-operators-6t27v\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.123074 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjzws\" (UniqueName: \"kubernetes.io/projected/3151b0ab-c890-45b5-939d-1ef740ac05e0-kube-api-access-kjzws\") pod \"redhat-operators-6t27v\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.123964 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-utilities\") pod \"redhat-operators-6t27v\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.124286 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-catalog-content\") pod \"redhat-operators-6t27v\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.146684 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjzws\" (UniqueName: 
\"kubernetes.io/projected/3151b0ab-c890-45b5-939d-1ef740ac05e0-kube-api-access-kjzws\") pod \"redhat-operators-6t27v\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.272842 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.672471 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6t27v"] Dec 04 09:33:23 crc kubenswrapper[4841]: I1204 09:33:23.818586 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t27v" event={"ID":"3151b0ab-c890-45b5-939d-1ef740ac05e0","Type":"ContainerStarted","Data":"5c12e855981057172a97834a685f946324954ff3d1f32f13be4765288169790d"} Dec 04 09:33:25 crc kubenswrapper[4841]: I1204 09:33:25.851440 4841 generic.go:334] "Generic (PLEG): container finished" podID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerID="af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e" exitCode=0 Dec 04 09:33:25 crc kubenswrapper[4841]: I1204 09:33:25.851669 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t27v" event={"ID":"3151b0ab-c890-45b5-939d-1ef740ac05e0","Type":"ContainerDied","Data":"af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e"} Dec 04 09:33:28 crc kubenswrapper[4841]: I1204 09:33:28.873923 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t27v" event={"ID":"3151b0ab-c890-45b5-939d-1ef740ac05e0","Type":"ContainerStarted","Data":"f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be"} Dec 04 09:33:29 crc kubenswrapper[4841]: I1204 09:33:29.892462 4841 generic.go:334] "Generic (PLEG): container finished" podID="3151b0ab-c890-45b5-939d-1ef740ac05e0" 
containerID="f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be" exitCode=0 Dec 04 09:33:29 crc kubenswrapper[4841]: I1204 09:33:29.892595 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t27v" event={"ID":"3151b0ab-c890-45b5-939d-1ef740ac05e0","Type":"ContainerDied","Data":"f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be"} Dec 04 09:33:30 crc kubenswrapper[4841]: I1204 09:33:30.903673 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t27v" event={"ID":"3151b0ab-c890-45b5-939d-1ef740ac05e0","Type":"ContainerStarted","Data":"3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734"} Dec 04 09:33:30 crc kubenswrapper[4841]: I1204 09:33:30.930077 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6t27v" podStartSLOduration=4.317772906 podStartE2EDuration="8.930053396s" podCreationTimestamp="2025-12-04 09:33:22 +0000 UTC" firstStartedPulling="2025-12-04 09:33:25.855282341 +0000 UTC m=+872.607072545" lastFinishedPulling="2025-12-04 09:33:30.467562831 +0000 UTC m=+877.219353035" observedRunningTime="2025-12-04 09:33:30.929482293 +0000 UTC m=+877.681272507" watchObservedRunningTime="2025-12-04 09:33:30.930053396 +0000 UTC m=+877.681843640" Dec 04 09:33:33 crc kubenswrapper[4841]: I1204 09:33:33.273590 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:33 crc kubenswrapper[4841]: I1204 09:33:33.274044 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:34 crc kubenswrapper[4841]: I1204 09:33:34.323353 4841 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6t27v" podUID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerName="registry-server" probeResult="failure" 
output=< Dec 04 09:33:34 crc kubenswrapper[4841]: timeout: failed to connect service ":50051" within 1s Dec 04 09:33:34 crc kubenswrapper[4841]: > Dec 04 09:33:43 crc kubenswrapper[4841]: I1204 09:33:43.345171 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:43 crc kubenswrapper[4841]: I1204 09:33:43.426288 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:43 crc kubenswrapper[4841]: I1204 09:33:43.595108 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6t27v"] Dec 04 09:33:45 crc kubenswrapper[4841]: I1204 09:33:45.012200 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6t27v" podUID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerName="registry-server" containerID="cri-o://3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734" gracePeriod=2 Dec 04 09:33:45 crc kubenswrapper[4841]: I1204 09:33:45.983736 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.023711 4841 generic.go:334] "Generic (PLEG): container finished" podID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerID="3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734" exitCode=0 Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.023799 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t27v" event={"ID":"3151b0ab-c890-45b5-939d-1ef740ac05e0","Type":"ContainerDied","Data":"3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734"} Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.023838 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t27v" event={"ID":"3151b0ab-c890-45b5-939d-1ef740ac05e0","Type":"ContainerDied","Data":"5c12e855981057172a97834a685f946324954ff3d1f32f13be4765288169790d"} Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.023867 4841 scope.go:117] "RemoveContainer" containerID="3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.024065 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6t27v" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.054462 4841 scope.go:117] "RemoveContainer" containerID="f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.080573 4841 scope.go:117] "RemoveContainer" containerID="af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.092925 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-utilities\") pod \"3151b0ab-c890-45b5-939d-1ef740ac05e0\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.093061 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-catalog-content\") pod \"3151b0ab-c890-45b5-939d-1ef740ac05e0\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.093338 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjzws\" (UniqueName: \"kubernetes.io/projected/3151b0ab-c890-45b5-939d-1ef740ac05e0-kube-api-access-kjzws\") pod \"3151b0ab-c890-45b5-939d-1ef740ac05e0\" (UID: \"3151b0ab-c890-45b5-939d-1ef740ac05e0\") " Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.094484 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-utilities" (OuterVolumeSpecName: "utilities") pod "3151b0ab-c890-45b5-939d-1ef740ac05e0" (UID: "3151b0ab-c890-45b5-939d-1ef740ac05e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.100070 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3151b0ab-c890-45b5-939d-1ef740ac05e0-kube-api-access-kjzws" (OuterVolumeSpecName: "kube-api-access-kjzws") pod "3151b0ab-c890-45b5-939d-1ef740ac05e0" (UID: "3151b0ab-c890-45b5-939d-1ef740ac05e0"). InnerVolumeSpecName "kube-api-access-kjzws". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.112386 4841 scope.go:117] "RemoveContainer" containerID="3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734" Dec 04 09:33:46 crc kubenswrapper[4841]: E1204 09:33:46.113988 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734\": container with ID starting with 3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734 not found: ID does not exist" containerID="3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.114065 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734"} err="failed to get container status \"3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734\": rpc error: code = NotFound desc = could not find container \"3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734\": container with ID starting with 3893d2f0387cbf3f360f808de3cb623b5f2cdca027be6bd74f0f2e1ee5789734 not found: ID does not exist" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.114114 4841 scope.go:117] "RemoveContainer" containerID="f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be" Dec 04 09:33:46 crc kubenswrapper[4841]: E1204 09:33:46.114736 
4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be\": container with ID starting with f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be not found: ID does not exist" containerID="f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.114839 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be"} err="failed to get container status \"f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be\": rpc error: code = NotFound desc = could not find container \"f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be\": container with ID starting with f1a7665831a38358b3c26df6b74cfac72ef6ad0db9ae87297801e49fe040f8be not found: ID does not exist" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.114896 4841 scope.go:117] "RemoveContainer" containerID="af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e" Dec 04 09:33:46 crc kubenswrapper[4841]: E1204 09:33:46.115424 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e\": container with ID starting with af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e not found: ID does not exist" containerID="af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.115470 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e"} err="failed to get container status \"af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e\": rpc error: code = 
NotFound desc = could not find container \"af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e\": container with ID starting with af16f8cb0a0c3bac31c79fee8df714a9d602dfb03987080e604bbcb81438296e not found: ID does not exist" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.195044 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjzws\" (UniqueName: \"kubernetes.io/projected/3151b0ab-c890-45b5-939d-1ef740ac05e0-kube-api-access-kjzws\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.195080 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.257831 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3151b0ab-c890-45b5-939d-1ef740ac05e0" (UID: "3151b0ab-c890-45b5-939d-1ef740ac05e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.296239 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3151b0ab-c890-45b5-939d-1ef740ac05e0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.369680 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6t27v"] Dec 04 09:33:46 crc kubenswrapper[4841]: I1204 09:33:46.377937 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6t27v"] Dec 04 09:33:47 crc kubenswrapper[4841]: I1204 09:33:47.030861 4841 generic.go:334] "Generic (PLEG): container finished" podID="6f6b00a4-ef75-4479-b809-47eb43546686" containerID="cf6f0e1905a69b9cd9e14dc5f5bfcd5ca9adacd13db0bb0930dcb80d7b66c399" exitCode=0 Dec 04 09:33:47 crc kubenswrapper[4841]: I1204 09:33:47.030932 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6f6b00a4-ef75-4479-b809-47eb43546686","Type":"ContainerDied","Data":"cf6f0e1905a69b9cd9e14dc5f5bfcd5ca9adacd13db0bb0930dcb80d7b66c399"} Dec 04 09:33:47 crc kubenswrapper[4841]: I1204 09:33:47.631033 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3151b0ab-c890-45b5-939d-1ef740ac05e0" path="/var/lib/kubelet/pods/3151b0ab-c890-45b5-939d-1ef740ac05e0/volumes" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.324800 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.430825 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xjgtw-push\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-push\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.430883 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xjgtw-pull\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-pull\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.430922 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-build-blob-cache\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.430958 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-root\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431025 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: 
\"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431065 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-system-configs\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431147 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-buildworkdir\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431214 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-buildcachedir\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431415 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-node-pullsecrets\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431472 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-proxy-ca-bundles\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431461 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431516 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-run\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431543 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-ca-bundles\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431576 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcmwz\" (UniqueName: \"kubernetes.io/projected/6f6b00a4-ef75-4479-b809-47eb43546686-kube-api-access-vcmwz\") pod \"6f6b00a4-ef75-4479-b809-47eb43546686\" (UID: \"6f6b00a4-ef75-4479-b809-47eb43546686\") " Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.431583 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.432368 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.432539 4841 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.432574 4841 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.432588 4841 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6f6b00a4-ef75-4479-b809-47eb43546686-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.432541 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.432663 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.433461 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.433818 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.439713 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-pull" (OuterVolumeSpecName: "builder-dockercfg-xjgtw-pull") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "builder-dockercfg-xjgtw-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.439812 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.440549 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6b00a4-ef75-4479-b809-47eb43546686-kube-api-access-vcmwz" (OuterVolumeSpecName: "kube-api-access-vcmwz") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "kube-api-access-vcmwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.445131 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-push" (OuterVolumeSpecName: "builder-dockercfg-xjgtw-push") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "builder-dockercfg-xjgtw-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.533887 4841 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.534748 4841 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.534903 4841 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.534918 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcmwz\" (UniqueName: \"kubernetes.io/projected/6f6b00a4-ef75-4479-b809-47eb43546686-kube-api-access-vcmwz\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.534934 4841 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xjgtw-push\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-push\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.534948 4841 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xjgtw-pull\" (UniqueName: \"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-builder-dockercfg-xjgtw-pull\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.534973 4841 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/6f6b00a4-ef75-4479-b809-47eb43546686-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.534991 4841 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6f6b00a4-ef75-4479-b809-47eb43546686-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.639957 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: "6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:33:48 crc kubenswrapper[4841]: I1204 09:33:48.736561 4841 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.057242 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6f6b00a4-ef75-4479-b809-47eb43546686","Type":"ContainerDied","Data":"d30941f3919f7f505ba2f2a257458ed792fff3c86c5f770465278a6509e6865c"} Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.057323 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d30941f3919f7f505ba2f2a257458ed792fff3c86c5f770465278a6509e6865c" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.057381 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.808065 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-msd4r"] Dec 04 09:33:49 crc kubenswrapper[4841]: E1204 09:33:49.808386 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerName="registry-server" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.808411 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerName="registry-server" Dec 04 09:33:49 crc kubenswrapper[4841]: E1204 09:33:49.808435 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerName="extract-content" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.808450 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerName="extract-content" Dec 04 09:33:49 crc kubenswrapper[4841]: E1204 09:33:49.808605 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6b00a4-ef75-4479-b809-47eb43546686" containerName="git-clone" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.810023 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6b00a4-ef75-4479-b809-47eb43546686" containerName="git-clone" Dec 04 09:33:49 crc kubenswrapper[4841]: E1204 09:33:49.810090 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerName="extract-utilities" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.810106 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerName="extract-utilities" Dec 04 09:33:49 crc kubenswrapper[4841]: E1204 09:33:49.810172 4841 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6f6b00a4-ef75-4479-b809-47eb43546686" containerName="manage-dockerfile" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.810186 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6b00a4-ef75-4479-b809-47eb43546686" containerName="manage-dockerfile" Dec 04 09:33:49 crc kubenswrapper[4841]: E1204 09:33:49.810209 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6b00a4-ef75-4479-b809-47eb43546686" containerName="docker-build" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.810260 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6b00a4-ef75-4479-b809-47eb43546686" containerName="docker-build" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.811571 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="3151b0ab-c890-45b5-939d-1ef740ac05e0" containerName="registry-server" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.811731 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6b00a4-ef75-4479-b809-47eb43546686" containerName="docker-build" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.812728 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-msd4r" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.816101 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-ptwd4" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.828215 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-msd4r"] Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.856167 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh4fd\" (UniqueName: \"kubernetes.io/projected/bd98de75-a6be-485e-a2b6-fd86e919b933-kube-api-access-hh4fd\") pod \"infrawatch-operators-msd4r\" (UID: \"bd98de75-a6be-485e-a2b6-fd86e919b933\") " pod="service-telemetry/infrawatch-operators-msd4r" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.958247 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh4fd\" (UniqueName: \"kubernetes.io/projected/bd98de75-a6be-485e-a2b6-fd86e919b933-kube-api-access-hh4fd\") pod \"infrawatch-operators-msd4r\" (UID: \"bd98de75-a6be-485e-a2b6-fd86e919b933\") " pod="service-telemetry/infrawatch-operators-msd4r" Dec 04 09:33:49 crc kubenswrapper[4841]: I1204 09:33:49.980125 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh4fd\" (UniqueName: \"kubernetes.io/projected/bd98de75-a6be-485e-a2b6-fd86e919b933-kube-api-access-hh4fd\") pod \"infrawatch-operators-msd4r\" (UID: \"bd98de75-a6be-485e-a2b6-fd86e919b933\") " pod="service-telemetry/infrawatch-operators-msd4r" Dec 04 09:33:50 crc kubenswrapper[4841]: I1204 09:33:50.077640 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6f6b00a4-ef75-4479-b809-47eb43546686" (UID: 
"6f6b00a4-ef75-4479-b809-47eb43546686"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:33:50 crc kubenswrapper[4841]: I1204 09:33:50.144963 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-msd4r" Dec 04 09:33:50 crc kubenswrapper[4841]: I1204 09:33:50.162401 4841 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6f6b00a4-ef75-4479-b809-47eb43546686-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 04 09:33:50 crc kubenswrapper[4841]: I1204 09:33:50.633891 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-msd4r"] Dec 04 09:33:51 crc kubenswrapper[4841]: I1204 09:33:51.072284 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-msd4r" event={"ID":"bd98de75-a6be-485e-a2b6-fd86e919b933","Type":"ContainerStarted","Data":"2c7ac179c1b791a8b91183604a498d3a35adab5616a301f1a23e3ca1492a2112"} Dec 04 09:33:54 crc kubenswrapper[4841]: I1204 09:33:54.419395 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-msd4r"] Dec 04 09:33:55 crc kubenswrapper[4841]: I1204 09:33:55.396697 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-466xm"] Dec 04 09:33:55 crc kubenswrapper[4841]: I1204 09:33:55.398987 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-466xm" Dec 04 09:33:55 crc kubenswrapper[4841]: I1204 09:33:55.405130 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-466xm"] Dec 04 09:33:55 crc kubenswrapper[4841]: I1204 09:33:55.539847 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmr5c\" (UniqueName: \"kubernetes.io/projected/c2a10410-1246-4af9-b4df-5074bbc095c5-kube-api-access-gmr5c\") pod \"infrawatch-operators-466xm\" (UID: \"c2a10410-1246-4af9-b4df-5074bbc095c5\") " pod="service-telemetry/infrawatch-operators-466xm" Dec 04 09:33:55 crc kubenswrapper[4841]: I1204 09:33:55.641012 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmr5c\" (UniqueName: \"kubernetes.io/projected/c2a10410-1246-4af9-b4df-5074bbc095c5-kube-api-access-gmr5c\") pod \"infrawatch-operators-466xm\" (UID: \"c2a10410-1246-4af9-b4df-5074bbc095c5\") " pod="service-telemetry/infrawatch-operators-466xm" Dec 04 09:33:55 crc kubenswrapper[4841]: I1204 09:33:55.659549 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmr5c\" (UniqueName: \"kubernetes.io/projected/c2a10410-1246-4af9-b4df-5074bbc095c5-kube-api-access-gmr5c\") pod \"infrawatch-operators-466xm\" (UID: \"c2a10410-1246-4af9-b4df-5074bbc095c5\") " pod="service-telemetry/infrawatch-operators-466xm" Dec 04 09:33:55 crc kubenswrapper[4841]: I1204 09:33:55.721651 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-466xm" Dec 04 09:34:05 crc kubenswrapper[4841]: I1204 09:34:05.516754 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-466xm"] Dec 04 09:34:05 crc kubenswrapper[4841]: E1204 09:34:05.581539 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 04 09:34:05 crc kubenswrapper[4841]: E1204 09:34:05.582353 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hh4fd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-msd4r_service-telemetry(bd98de75-a6be-485e-a2b6-fd86e919b933): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 09:34:05 crc kubenswrapper[4841]: E1204 09:34:05.583952 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/infrawatch-operators-msd4r" podUID="bd98de75-a6be-485e-a2b6-fd86e919b933" Dec 04 09:34:06 crc kubenswrapper[4841]: I1204 09:34:06.172869 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-466xm" event={"ID":"c2a10410-1246-4af9-b4df-5074bbc095c5","Type":"ContainerStarted","Data":"3bcca20d2dc39ea8e8ad9c460e1a5e072f5a1f1932220c4c43d8138ffae3474f"} Dec 04 09:34:06 crc kubenswrapper[4841]: I1204 09:34:06.523233 4841 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-msd4r" Dec 04 09:34:06 crc kubenswrapper[4841]: I1204 09:34:06.629682 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh4fd\" (UniqueName: \"kubernetes.io/projected/bd98de75-a6be-485e-a2b6-fd86e919b933-kube-api-access-hh4fd\") pod \"bd98de75-a6be-485e-a2b6-fd86e919b933\" (UID: \"bd98de75-a6be-485e-a2b6-fd86e919b933\") " Dec 04 09:34:06 crc kubenswrapper[4841]: I1204 09:34:06.639086 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd98de75-a6be-485e-a2b6-fd86e919b933-kube-api-access-hh4fd" (OuterVolumeSpecName: "kube-api-access-hh4fd") pod "bd98de75-a6be-485e-a2b6-fd86e919b933" (UID: "bd98de75-a6be-485e-a2b6-fd86e919b933"). InnerVolumeSpecName "kube-api-access-hh4fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:34:06 crc kubenswrapper[4841]: I1204 09:34:06.735386 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh4fd\" (UniqueName: \"kubernetes.io/projected/bd98de75-a6be-485e-a2b6-fd86e919b933-kube-api-access-hh4fd\") on node \"crc\" DevicePath \"\"" Dec 04 09:34:07 crc kubenswrapper[4841]: I1204 09:34:07.180835 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-466xm" event={"ID":"c2a10410-1246-4af9-b4df-5074bbc095c5","Type":"ContainerStarted","Data":"e75f0b0d0e76d84acb4e335193ddf9a49e0863eefd06f42f356faf762dfb5a0d"} Dec 04 09:34:07 crc kubenswrapper[4841]: I1204 09:34:07.181791 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-msd4r" event={"ID":"bd98de75-a6be-485e-a2b6-fd86e919b933","Type":"ContainerDied","Data":"2c7ac179c1b791a8b91183604a498d3a35adab5616a301f1a23e3ca1492a2112"} Dec 04 09:34:07 crc kubenswrapper[4841]: I1204 09:34:07.181835 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-msd4r" Dec 04 09:34:07 crc kubenswrapper[4841]: I1204 09:34:07.203964 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-466xm" podStartSLOduration=11.185698663 podStartE2EDuration="12.203938949s" podCreationTimestamp="2025-12-04 09:33:55 +0000 UTC" firstStartedPulling="2025-12-04 09:34:05.526305869 +0000 UTC m=+912.278096073" lastFinishedPulling="2025-12-04 09:34:06.544546155 +0000 UTC m=+913.296336359" observedRunningTime="2025-12-04 09:34:07.198290942 +0000 UTC m=+913.950081176" watchObservedRunningTime="2025-12-04 09:34:07.203938949 +0000 UTC m=+913.955729183" Dec 04 09:34:07 crc kubenswrapper[4841]: I1204 09:34:07.264503 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-msd4r"] Dec 04 09:34:07 crc kubenswrapper[4841]: I1204 09:34:07.269627 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-msd4r"] Dec 04 09:34:07 crc kubenswrapper[4841]: I1204 09:34:07.630108 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd98de75-a6be-485e-a2b6-fd86e919b933" path="/var/lib/kubelet/pods/bd98de75-a6be-485e-a2b6-fd86e919b933/volumes" Dec 04 09:34:15 crc kubenswrapper[4841]: I1204 09:34:15.722878 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-466xm" Dec 04 09:34:15 crc kubenswrapper[4841]: I1204 09:34:15.723821 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-466xm" Dec 04 09:34:15 crc kubenswrapper[4841]: I1204 09:34:15.762782 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-466xm" Dec 04 09:34:16 crc kubenswrapper[4841]: I1204 09:34:16.289006 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="service-telemetry/infrawatch-operators-466xm" Dec 04 09:34:20 crc kubenswrapper[4841]: I1204 09:34:20.497403 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:34:20 crc kubenswrapper[4841]: I1204 09:34:20.497794 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.384277 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7"] Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.385916 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.390878 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.391454 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7"] Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.567224 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tctcl\" (UniqueName: \"kubernetes.io/projected/e5b35625-5dbb-4f76-960d-04ac80bd487c-kube-api-access-tctcl\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.567669 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.567912 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:28 crc kubenswrapper[4841]: 
I1204 09:34:28.669796 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tctcl\" (UniqueName: \"kubernetes.io/projected/e5b35625-5dbb-4f76-960d-04ac80bd487c-kube-api-access-tctcl\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.670366 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.671544 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.671292 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.672101 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.699294 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tctcl\" (UniqueName: \"kubernetes.io/projected/e5b35625-5dbb-4f76-960d-04ac80bd487c-kube-api-access-tctcl\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:28 crc kubenswrapper[4841]: I1204 09:34:28.706849 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.170406 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7"] Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.353081 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" event={"ID":"e5b35625-5dbb-4f76-960d-04ac80bd487c","Type":"ContainerStarted","Data":"88c2237f6864e463f9fb84b3df95b4dba5426888b0148d327bb8fe2be09e3316"} Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.394133 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq"] Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.396348 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.406375 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq"] Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.482622 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.483124 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.483418 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rxvp\" (UniqueName: \"kubernetes.io/projected/fe373d7e-286e-48bc-b354-6f5211d7a691-kube-api-access-6rxvp\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.584567 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rxvp\" (UniqueName: 
\"kubernetes.io/projected/fe373d7e-286e-48bc-b354-6f5211d7a691-kube-api-access-6rxvp\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.584653 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.584713 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.585457 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.585692 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " 
pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.615223 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rxvp\" (UniqueName: \"kubernetes.io/projected/fe373d7e-286e-48bc-b354-6f5211d7a691-kube-api-access-6rxvp\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:29 crc kubenswrapper[4841]: I1204 09:34:29.716383 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.005772 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq"] Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.362367 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" event={"ID":"fe373d7e-286e-48bc-b354-6f5211d7a691","Type":"ContainerStarted","Data":"247e29711a9a620480acd89695e6014f833df03320a8fedbcf1dbac58c2f2d7f"} Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.383970 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v"] Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.386158 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.417490 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v"] Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.497193 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.497252 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-util\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.497304 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmzh\" (UniqueName: \"kubernetes.io/projected/4d14745b-966a-46dc-acf2-e92345f1a18b-kube-api-access-wgmzh\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.598111 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-util\") pod 
\"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.598180 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgmzh\" (UniqueName: \"kubernetes.io/projected/4d14745b-966a-46dc-acf2-e92345f1a18b-kube-api-access-wgmzh\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.598250 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.598727 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-util\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.598736 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " 
pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.616196 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgmzh\" (UniqueName: \"kubernetes.io/projected/4d14745b-966a-46dc-acf2-e92345f1a18b-kube-api-access-wgmzh\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.701515 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:30 crc kubenswrapper[4841]: I1204 09:34:30.954018 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v"] Dec 04 09:34:31 crc kubenswrapper[4841]: W1204 09:34:31.006361 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d14745b_966a_46dc_acf2_e92345f1a18b.slice/crio-40956b916486e3e07840cbadab3209d02618785d823d632a9718c236b8096c4d WatchSource:0}: Error finding container 40956b916486e3e07840cbadab3209d02618785d823d632a9718c236b8096c4d: Status 404 returned error can't find the container with id 40956b916486e3e07840cbadab3209d02618785d823d632a9718c236b8096c4d Dec 04 09:34:31 crc kubenswrapper[4841]: I1204 09:34:31.371059 4841 generic.go:334] "Generic (PLEG): container finished" podID="e5b35625-5dbb-4f76-960d-04ac80bd487c" containerID="f9c2811ba22d0bd9e11a23f28dc3bbfb87f40c5486a60f85e6090041a511d11c" exitCode=0 Dec 04 09:34:31 crc kubenswrapper[4841]: I1204 09:34:31.371183 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" 
event={"ID":"e5b35625-5dbb-4f76-960d-04ac80bd487c","Type":"ContainerDied","Data":"f9c2811ba22d0bd9e11a23f28dc3bbfb87f40c5486a60f85e6090041a511d11c"} Dec 04 09:34:31 crc kubenswrapper[4841]: I1204 09:34:31.373196 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" event={"ID":"4d14745b-966a-46dc-acf2-e92345f1a18b","Type":"ContainerStarted","Data":"40956b916486e3e07840cbadab3209d02618785d823d632a9718c236b8096c4d"} Dec 04 09:34:31 crc kubenswrapper[4841]: I1204 09:34:31.376159 4841 generic.go:334] "Generic (PLEG): container finished" podID="fe373d7e-286e-48bc-b354-6f5211d7a691" containerID="f26731330bd1f0bd9518a66417443f5db1acbcdbb7f626350c891103056f4b4b" exitCode=0 Dec 04 09:34:31 crc kubenswrapper[4841]: I1204 09:34:31.376198 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" event={"ID":"fe373d7e-286e-48bc-b354-6f5211d7a691","Type":"ContainerDied","Data":"f26731330bd1f0bd9518a66417443f5db1acbcdbb7f626350c891103056f4b4b"} Dec 04 09:34:32 crc kubenswrapper[4841]: I1204 09:34:32.384884 4841 generic.go:334] "Generic (PLEG): container finished" podID="fe373d7e-286e-48bc-b354-6f5211d7a691" containerID="eab0c15bc607a4fd417990ea3a6848457035e24bd4f0b5a49191d4bf54c17f8f" exitCode=0 Dec 04 09:34:32 crc kubenswrapper[4841]: I1204 09:34:32.385007 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" event={"ID":"fe373d7e-286e-48bc-b354-6f5211d7a691","Type":"ContainerDied","Data":"eab0c15bc607a4fd417990ea3a6848457035e24bd4f0b5a49191d4bf54c17f8f"} Dec 04 09:34:32 crc kubenswrapper[4841]: I1204 09:34:32.389023 4841 generic.go:334] "Generic (PLEG): container finished" podID="4d14745b-966a-46dc-acf2-e92345f1a18b" containerID="4b831a50e8fb62ae8f7ad6180eed5de6cfffe6e9920ba37b4ff5a8faedeffd93" exitCode=0 Dec 04 
09:34:32 crc kubenswrapper[4841]: I1204 09:34:32.389090 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" event={"ID":"4d14745b-966a-46dc-acf2-e92345f1a18b","Type":"ContainerDied","Data":"4b831a50e8fb62ae8f7ad6180eed5de6cfffe6e9920ba37b4ff5a8faedeffd93"} Dec 04 09:34:33 crc kubenswrapper[4841]: I1204 09:34:33.397075 4841 generic.go:334] "Generic (PLEG): container finished" podID="fe373d7e-286e-48bc-b354-6f5211d7a691" containerID="6b1d9fae08d9ce60928fb37372ed88eab00fc6d19a8599c925a2caa69bce1549" exitCode=0 Dec 04 09:34:33 crc kubenswrapper[4841]: I1204 09:34:33.397330 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" event={"ID":"fe373d7e-286e-48bc-b354-6f5211d7a691","Type":"ContainerDied","Data":"6b1d9fae08d9ce60928fb37372ed88eab00fc6d19a8599c925a2caa69bce1549"} Dec 04 09:34:33 crc kubenswrapper[4841]: I1204 09:34:33.400347 4841 generic.go:334] "Generic (PLEG): container finished" podID="e5b35625-5dbb-4f76-960d-04ac80bd487c" containerID="d97066f16ded3c0902fef785c57280cb48593acbc130bf1e90831f8fc179a312" exitCode=0 Dec 04 09:34:33 crc kubenswrapper[4841]: I1204 09:34:33.400401 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" event={"ID":"e5b35625-5dbb-4f76-960d-04ac80bd487c","Type":"ContainerDied","Data":"d97066f16ded3c0902fef785c57280cb48593acbc130bf1e90831f8fc179a312"} Dec 04 09:34:33 crc kubenswrapper[4841]: I1204 09:34:33.403074 4841 generic.go:334] "Generic (PLEG): container finished" podID="4d14745b-966a-46dc-acf2-e92345f1a18b" containerID="09d25211cdbf58749e7a0918a66a74d4adea589a24bfc98530ee69c360815caa" exitCode=0 Dec 04 09:34:33 crc kubenswrapper[4841]: I1204 09:34:33.403130 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" event={"ID":"4d14745b-966a-46dc-acf2-e92345f1a18b","Type":"ContainerDied","Data":"09d25211cdbf58749e7a0918a66a74d4adea589a24bfc98530ee69c360815caa"} Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.418079 4841 generic.go:334] "Generic (PLEG): container finished" podID="e5b35625-5dbb-4f76-960d-04ac80bd487c" containerID="b89343f0d25a026c5dfaf1cf8d1bed389e625312497c204451bad27eb7253b3f" exitCode=0 Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.418193 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" event={"ID":"e5b35625-5dbb-4f76-960d-04ac80bd487c","Type":"ContainerDied","Data":"b89343f0d25a026c5dfaf1cf8d1bed389e625312497c204451bad27eb7253b3f"} Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.422045 4841 generic.go:334] "Generic (PLEG): container finished" podID="4d14745b-966a-46dc-acf2-e92345f1a18b" containerID="e006ffe089919b2d87eafa3d41dcd82a48e6a54470f8602ec612ffb1a3091c58" exitCode=0 Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.422106 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" event={"ID":"4d14745b-966a-46dc-acf2-e92345f1a18b","Type":"ContainerDied","Data":"e006ffe089919b2d87eafa3d41dcd82a48e6a54470f8602ec612ffb1a3091c58"} Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.795530 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.858870 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rxvp\" (UniqueName: \"kubernetes.io/projected/fe373d7e-286e-48bc-b354-6f5211d7a691-kube-api-access-6rxvp\") pod \"fe373d7e-286e-48bc-b354-6f5211d7a691\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.859028 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-util\") pod \"fe373d7e-286e-48bc-b354-6f5211d7a691\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.859056 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-bundle\") pod \"fe373d7e-286e-48bc-b354-6f5211d7a691\" (UID: \"fe373d7e-286e-48bc-b354-6f5211d7a691\") " Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.859846 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-bundle" (OuterVolumeSpecName: "bundle") pod "fe373d7e-286e-48bc-b354-6f5211d7a691" (UID: "fe373d7e-286e-48bc-b354-6f5211d7a691"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.864563 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe373d7e-286e-48bc-b354-6f5211d7a691-kube-api-access-6rxvp" (OuterVolumeSpecName: "kube-api-access-6rxvp") pod "fe373d7e-286e-48bc-b354-6f5211d7a691" (UID: "fe373d7e-286e-48bc-b354-6f5211d7a691"). InnerVolumeSpecName "kube-api-access-6rxvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.894616 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-util" (OuterVolumeSpecName: "util") pod "fe373d7e-286e-48bc-b354-6f5211d7a691" (UID: "fe373d7e-286e-48bc-b354-6f5211d7a691"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.961033 4841 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.961095 4841 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe373d7e-286e-48bc-b354-6f5211d7a691-util\") on node \"crc\" DevicePath \"\"" Dec 04 09:34:34 crc kubenswrapper[4841]: I1204 09:34:34.961117 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rxvp\" (UniqueName: \"kubernetes.io/projected/fe373d7e-286e-48bc-b354-6f5211d7a691-kube-api-access-6rxvp\") on node \"crc\" DevicePath \"\"" Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.435033 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" event={"ID":"fe373d7e-286e-48bc-b354-6f5211d7a691","Type":"ContainerDied","Data":"247e29711a9a620480acd89695e6014f833df03320a8fedbcf1dbac58c2f2d7f"} Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.435503 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="247e29711a9a620480acd89695e6014f833df03320a8fedbcf1dbac58c2f2d7f" Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.435062 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebmmqlq" Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.841821 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.855176 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.974564 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-util\") pod \"e5b35625-5dbb-4f76-960d-04ac80bd487c\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.974631 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgmzh\" (UniqueName: \"kubernetes.io/projected/4d14745b-966a-46dc-acf2-e92345f1a18b-kube-api-access-wgmzh\") pod \"4d14745b-966a-46dc-acf2-e92345f1a18b\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.974736 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-bundle\") pod \"4d14745b-966a-46dc-acf2-e92345f1a18b\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.974830 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-bundle\") pod \"e5b35625-5dbb-4f76-960d-04ac80bd487c\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " Dec 04 09:34:35 crc 
kubenswrapper[4841]: I1204 09:34:35.974877 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tctcl\" (UniqueName: \"kubernetes.io/projected/e5b35625-5dbb-4f76-960d-04ac80bd487c-kube-api-access-tctcl\") pod \"e5b35625-5dbb-4f76-960d-04ac80bd487c\" (UID: \"e5b35625-5dbb-4f76-960d-04ac80bd487c\") " Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.974908 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-util\") pod \"4d14745b-966a-46dc-acf2-e92345f1a18b\" (UID: \"4d14745b-966a-46dc-acf2-e92345f1a18b\") " Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.975823 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-bundle" (OuterVolumeSpecName: "bundle") pod "4d14745b-966a-46dc-acf2-e92345f1a18b" (UID: "4d14745b-966a-46dc-acf2-e92345f1a18b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.975969 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-bundle" (OuterVolumeSpecName: "bundle") pod "e5b35625-5dbb-4f76-960d-04ac80bd487c" (UID: "e5b35625-5dbb-4f76-960d-04ac80bd487c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.980580 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b35625-5dbb-4f76-960d-04ac80bd487c-kube-api-access-tctcl" (OuterVolumeSpecName: "kube-api-access-tctcl") pod "e5b35625-5dbb-4f76-960d-04ac80bd487c" (UID: "e5b35625-5dbb-4f76-960d-04ac80bd487c"). InnerVolumeSpecName "kube-api-access-tctcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:34:35 crc kubenswrapper[4841]: I1204 09:34:35.981201 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d14745b-966a-46dc-acf2-e92345f1a18b-kube-api-access-wgmzh" (OuterVolumeSpecName: "kube-api-access-wgmzh") pod "4d14745b-966a-46dc-acf2-e92345f1a18b" (UID: "4d14745b-966a-46dc-acf2-e92345f1a18b"). InnerVolumeSpecName "kube-api-access-wgmzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.004403 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-util" (OuterVolumeSpecName: "util") pod "4d14745b-966a-46dc-acf2-e92345f1a18b" (UID: "4d14745b-966a-46dc-acf2-e92345f1a18b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.076184 4841 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.076230 4841 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.076250 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tctcl\" (UniqueName: \"kubernetes.io/projected/e5b35625-5dbb-4f76-960d-04ac80bd487c-kube-api-access-tctcl\") on node \"crc\" DevicePath \"\"" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.076270 4841 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d14745b-966a-46dc-acf2-e92345f1a18b-util\") on node \"crc\" DevicePath \"\"" Dec 04 09:34:36 crc 
kubenswrapper[4841]: I1204 09:34:36.076287 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgmzh\" (UniqueName: \"kubernetes.io/projected/4d14745b-966a-46dc-acf2-e92345f1a18b-kube-api-access-wgmzh\") on node \"crc\" DevicePath \"\"" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.175957 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-util" (OuterVolumeSpecName: "util") pod "e5b35625-5dbb-4f76-960d-04ac80bd487c" (UID: "e5b35625-5dbb-4f76-960d-04ac80bd487c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.177205 4841 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b35625-5dbb-4f76-960d-04ac80bd487c-util\") on node \"crc\" DevicePath \"\"" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.445833 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" event={"ID":"e5b35625-5dbb-4f76-960d-04ac80bd487c","Type":"ContainerDied","Data":"88c2237f6864e463f9fb84b3df95b4dba5426888b0148d327bb8fe2be09e3316"} Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.445874 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.445887 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88c2237f6864e463f9fb84b3df95b4dba5426888b0148d327bb8fe2be09e3316" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.448801 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" event={"ID":"4d14745b-966a-46dc-acf2-e92345f1a18b","Type":"ContainerDied","Data":"40956b916486e3e07840cbadab3209d02618785d823d632a9718c236b8096c4d"} Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.448856 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40956b916486e3e07840cbadab3209d02618785d823d632a9718c236b8096c4d" Dec 04 09:34:36 crc kubenswrapper[4841]: I1204 09:34:36.448885 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bv9s7v" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.583658 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt"] Dec 04 09:34:39 crc kubenswrapper[4841]: E1204 09:34:39.584375 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b35625-5dbb-4f76-960d-04ac80bd487c" containerName="util" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584391 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b35625-5dbb-4f76-960d-04ac80bd487c" containerName="util" Dec 04 09:34:39 crc kubenswrapper[4841]: E1204 09:34:39.584406 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b35625-5dbb-4f76-960d-04ac80bd487c" containerName="extract" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584414 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b35625-5dbb-4f76-960d-04ac80bd487c" containerName="extract" Dec 04 09:34:39 crc kubenswrapper[4841]: E1204 09:34:39.584424 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d14745b-966a-46dc-acf2-e92345f1a18b" containerName="extract" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584432 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d14745b-966a-46dc-acf2-e92345f1a18b" containerName="extract" Dec 04 09:34:39 crc kubenswrapper[4841]: E1204 09:34:39.584449 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe373d7e-286e-48bc-b354-6f5211d7a691" containerName="extract" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584456 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe373d7e-286e-48bc-b354-6f5211d7a691" containerName="extract" Dec 04 09:34:39 crc kubenswrapper[4841]: E1204 09:34:39.584467 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b35625-5dbb-4f76-960d-04ac80bd487c" 
containerName="pull" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584474 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b35625-5dbb-4f76-960d-04ac80bd487c" containerName="pull" Dec 04 09:34:39 crc kubenswrapper[4841]: E1204 09:34:39.584486 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe373d7e-286e-48bc-b354-6f5211d7a691" containerName="util" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584494 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe373d7e-286e-48bc-b354-6f5211d7a691" containerName="util" Dec 04 09:34:39 crc kubenswrapper[4841]: E1204 09:34:39.584506 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d14745b-966a-46dc-acf2-e92345f1a18b" containerName="pull" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584514 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d14745b-966a-46dc-acf2-e92345f1a18b" containerName="pull" Dec 04 09:34:39 crc kubenswrapper[4841]: E1204 09:34:39.584528 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe373d7e-286e-48bc-b354-6f5211d7a691" containerName="pull" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584536 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe373d7e-286e-48bc-b354-6f5211d7a691" containerName="pull" Dec 04 09:34:39 crc kubenswrapper[4841]: E1204 09:34:39.584549 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d14745b-966a-46dc-acf2-e92345f1a18b" containerName="util" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584557 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d14745b-966a-46dc-acf2-e92345f1a18b" containerName="util" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584677 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe373d7e-286e-48bc-b354-6f5211d7a691" containerName="extract" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584692 4841 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4d14745b-966a-46dc-acf2-e92345f1a18b" containerName="extract" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.584709 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b35625-5dbb-4f76-960d-04ac80bd487c" containerName="extract" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.585174 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.587176 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-d7nlf" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.628009 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g8gs\" (UniqueName: \"kubernetes.io/projected/2ff86174-988f-4af5-a9cb-2a8a1b6feb5d-kube-api-access-7g8gs\") pod \"smart-gateway-operator-595bcf4c87-mp4jt\" (UID: \"2ff86174-988f-4af5-a9cb-2a8a1b6feb5d\") " pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.628207 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2ff86174-988f-4af5-a9cb-2a8a1b6feb5d-runner\") pod \"smart-gateway-operator-595bcf4c87-mp4jt\" (UID: \"2ff86174-988f-4af5-a9cb-2a8a1b6feb5d\") " pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.632595 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt"] Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.729242 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2ff86174-988f-4af5-a9cb-2a8a1b6feb5d-runner\") pod 
\"smart-gateway-operator-595bcf4c87-mp4jt\" (UID: \"2ff86174-988f-4af5-a9cb-2a8a1b6feb5d\") " pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.729517 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g8gs\" (UniqueName: \"kubernetes.io/projected/2ff86174-988f-4af5-a9cb-2a8a1b6feb5d-kube-api-access-7g8gs\") pod \"smart-gateway-operator-595bcf4c87-mp4jt\" (UID: \"2ff86174-988f-4af5-a9cb-2a8a1b6feb5d\") " pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.730063 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2ff86174-988f-4af5-a9cb-2a8a1b6feb5d-runner\") pod \"smart-gateway-operator-595bcf4c87-mp4jt\" (UID: \"2ff86174-988f-4af5-a9cb-2a8a1b6feb5d\") " pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.755021 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g8gs\" (UniqueName: \"kubernetes.io/projected/2ff86174-988f-4af5-a9cb-2a8a1b6feb5d-kube-api-access-7g8gs\") pod \"smart-gateway-operator-595bcf4c87-mp4jt\" (UID: \"2ff86174-988f-4af5-a9cb-2a8a1b6feb5d\") " pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" Dec 04 09:34:39 crc kubenswrapper[4841]: I1204 09:34:39.903631 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" Dec 04 09:34:40 crc kubenswrapper[4841]: I1204 09:34:40.229755 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt"] Dec 04 09:34:40 crc kubenswrapper[4841]: I1204 09:34:40.475279 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" event={"ID":"2ff86174-988f-4af5-a9cb-2a8a1b6feb5d","Type":"ContainerStarted","Data":"4e5742eeed831231f1d2f802e09ff6f081047b288499b1f33c9ee60eb183baf0"} Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.450585 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-vpp27"] Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.451873 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-vpp27" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.456627 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-zxch5" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.474059 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-vpp27"] Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.524602 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vkdp5"] Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.530017 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.536532 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vkdp5"] Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.573395 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84x6j\" (UniqueName: \"kubernetes.io/projected/6f985179-ff60-499f-a53c-4d14ed90c3d6-kube-api-access-84x6j\") pod \"interconnect-operator-5bb49f789d-vpp27\" (UID: \"6f985179-ff60-499f-a53c-4d14ed90c3d6\") " pod="service-telemetry/interconnect-operator-5bb49f789d-vpp27" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.675485 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btjc9\" (UniqueName: \"kubernetes.io/projected/212fae43-0697-448b-a100-b87394d70d3b-kube-api-access-btjc9\") pod \"community-operators-vkdp5\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.675544 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84x6j\" (UniqueName: \"kubernetes.io/projected/6f985179-ff60-499f-a53c-4d14ed90c3d6-kube-api-access-84x6j\") pod \"interconnect-operator-5bb49f789d-vpp27\" (UID: \"6f985179-ff60-499f-a53c-4d14ed90c3d6\") " pod="service-telemetry/interconnect-operator-5bb49f789d-vpp27" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.675590 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-utilities\") pod \"community-operators-vkdp5\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:41 crc 
kubenswrapper[4841]: I1204 09:34:41.676060 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-catalog-content\") pod \"community-operators-vkdp5\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.719681 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84x6j\" (UniqueName: \"kubernetes.io/projected/6f985179-ff60-499f-a53c-4d14ed90c3d6-kube-api-access-84x6j\") pod \"interconnect-operator-5bb49f789d-vpp27\" (UID: \"6f985179-ff60-499f-a53c-4d14ed90c3d6\") " pod="service-telemetry/interconnect-operator-5bb49f789d-vpp27" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.769451 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-vpp27" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.777436 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btjc9\" (UniqueName: \"kubernetes.io/projected/212fae43-0697-448b-a100-b87394d70d3b-kube-api-access-btjc9\") pod \"community-operators-vkdp5\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.777528 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-utilities\") pod \"community-operators-vkdp5\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.777556 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-catalog-content\") pod \"community-operators-vkdp5\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.778156 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-catalog-content\") pod \"community-operators-vkdp5\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.778178 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-utilities\") pod \"community-operators-vkdp5\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.800797 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btjc9\" (UniqueName: \"kubernetes.io/projected/212fae43-0697-448b-a100-b87394d70d3b-kube-api-access-btjc9\") pod \"community-operators-vkdp5\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:41 crc kubenswrapper[4841]: I1204 09:34:41.852057 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:34:42 crc kubenswrapper[4841]: I1204 09:34:42.263909 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-vpp27"] Dec 04 09:34:42 crc kubenswrapper[4841]: W1204 09:34:42.285946 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f985179_ff60_499f_a53c_4d14ed90c3d6.slice/crio-3c0edb7d8dccd2ea27ce14878f02e375c08bc9cd54f51cd512fbdf117df39af3 WatchSource:0}: Error finding container 3c0edb7d8dccd2ea27ce14878f02e375c08bc9cd54f51cd512fbdf117df39af3: Status 404 returned error can't find the container with id 3c0edb7d8dccd2ea27ce14878f02e375c08bc9cd54f51cd512fbdf117df39af3 Dec 04 09:34:42 crc kubenswrapper[4841]: I1204 09:34:42.309346 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vkdp5"] Dec 04 09:34:42 crc kubenswrapper[4841]: W1204 09:34:42.317186 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod212fae43_0697_448b_a100_b87394d70d3b.slice/crio-0d8e71aea4b2c0d07996d505de7c476ff44abf274a92e39b290db90332b2850b WatchSource:0}: Error finding container 0d8e71aea4b2c0d07996d505de7c476ff44abf274a92e39b290db90332b2850b: Status 404 returned error can't find the container with id 0d8e71aea4b2c0d07996d505de7c476ff44abf274a92e39b290db90332b2850b Dec 04 09:34:42 crc kubenswrapper[4841]: I1204 09:34:42.498337 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkdp5" event={"ID":"212fae43-0697-448b-a100-b87394d70d3b","Type":"ContainerStarted","Data":"0d8e71aea4b2c0d07996d505de7c476ff44abf274a92e39b290db90332b2850b"} Dec 04 09:34:42 crc kubenswrapper[4841]: I1204 09:34:42.505521 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/interconnect-operator-5bb49f789d-vpp27" event={"ID":"6f985179-ff60-499f-a53c-4d14ed90c3d6","Type":"ContainerStarted","Data":"3c0edb7d8dccd2ea27ce14878f02e375c08bc9cd54f51cd512fbdf117df39af3"} Dec 04 09:34:42 crc kubenswrapper[4841]: I1204 09:34:42.898992 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-57f5646b69-t4vdd"] Dec 04 09:34:42 crc kubenswrapper[4841]: I1204 09:34:42.899995 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" Dec 04 09:34:42 crc kubenswrapper[4841]: I1204 09:34:42.901903 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-5v58n" Dec 04 09:34:42 crc kubenswrapper[4841]: I1204 09:34:42.919137 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-57f5646b69-t4vdd"] Dec 04 09:34:42 crc kubenswrapper[4841]: I1204 09:34:42.997413 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fphg\" (UniqueName: \"kubernetes.io/projected/d2338232-efab-4584-b317-2ccd0b36eaf2-kube-api-access-6fphg\") pod \"service-telemetry-operator-57f5646b69-t4vdd\" (UID: \"d2338232-efab-4584-b317-2ccd0b36eaf2\") " pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" Dec 04 09:34:42 crc kubenswrapper[4841]: I1204 09:34:42.997485 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/d2338232-efab-4584-b317-2ccd0b36eaf2-runner\") pod \"service-telemetry-operator-57f5646b69-t4vdd\" (UID: \"d2338232-efab-4584-b317-2ccd0b36eaf2\") " pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" Dec 04 09:34:43 crc kubenswrapper[4841]: I1204 09:34:43.098626 4841 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6fphg\" (UniqueName: \"kubernetes.io/projected/d2338232-efab-4584-b317-2ccd0b36eaf2-kube-api-access-6fphg\") pod \"service-telemetry-operator-57f5646b69-t4vdd\" (UID: \"d2338232-efab-4584-b317-2ccd0b36eaf2\") " pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" Dec 04 09:34:43 crc kubenswrapper[4841]: I1204 09:34:43.099129 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/d2338232-efab-4584-b317-2ccd0b36eaf2-runner\") pod \"service-telemetry-operator-57f5646b69-t4vdd\" (UID: \"d2338232-efab-4584-b317-2ccd0b36eaf2\") " pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" Dec 04 09:34:43 crc kubenswrapper[4841]: I1204 09:34:43.099799 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/d2338232-efab-4584-b317-2ccd0b36eaf2-runner\") pod \"service-telemetry-operator-57f5646b69-t4vdd\" (UID: \"d2338232-efab-4584-b317-2ccd0b36eaf2\") " pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" Dec 04 09:34:43 crc kubenswrapper[4841]: I1204 09:34:43.123781 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fphg\" (UniqueName: \"kubernetes.io/projected/d2338232-efab-4584-b317-2ccd0b36eaf2-kube-api-access-6fphg\") pod \"service-telemetry-operator-57f5646b69-t4vdd\" (UID: \"d2338232-efab-4584-b317-2ccd0b36eaf2\") " pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" Dec 04 09:34:43 crc kubenswrapper[4841]: I1204 09:34:43.219440 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" Dec 04 09:34:43 crc kubenswrapper[4841]: I1204 09:34:43.519572 4841 generic.go:334] "Generic (PLEG): container finished" podID="212fae43-0697-448b-a100-b87394d70d3b" containerID="9bebefe8c1728c5555498173a0b31fe854716594fb0dbbdf8bcc042acfb4dc59" exitCode=0 Dec 04 09:34:43 crc kubenswrapper[4841]: I1204 09:34:43.519793 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkdp5" event={"ID":"212fae43-0697-448b-a100-b87394d70d3b","Type":"ContainerDied","Data":"9bebefe8c1728c5555498173a0b31fe854716594fb0dbbdf8bcc042acfb4dc59"} Dec 04 09:34:43 crc kubenswrapper[4841]: I1204 09:34:43.763097 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-57f5646b69-t4vdd"] Dec 04 09:34:43 crc kubenswrapper[4841]: W1204 09:34:43.769793 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2338232_efab_4584_b317_2ccd0b36eaf2.slice/crio-c27e1833ff9dd0f327e08c35d8a991156ce3a77f6233e4034c3d5cde154826ea WatchSource:0}: Error finding container c27e1833ff9dd0f327e08c35d8a991156ce3a77f6233e4034c3d5cde154826ea: Status 404 returned error can't find the container with id c27e1833ff9dd0f327e08c35d8a991156ce3a77f6233e4034c3d5cde154826ea Dec 04 09:34:44 crc kubenswrapper[4841]: I1204 09:34:44.531356 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" event={"ID":"d2338232-efab-4584-b317-2ccd0b36eaf2","Type":"ContainerStarted","Data":"c27e1833ff9dd0f327e08c35d8a991156ce3a77f6233e4034c3d5cde154826ea"} Dec 04 09:34:44 crc kubenswrapper[4841]: I1204 09:34:44.538047 4841 generic.go:334] "Generic (PLEG): container finished" podID="212fae43-0697-448b-a100-b87394d70d3b" containerID="551a18fd9b152ebe4db6bee8964554b9138f16afabf5c2e7b73ae0318d670ab6" exitCode=0 Dec 04 
09:34:44 crc kubenswrapper[4841]: I1204 09:34:44.538135 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkdp5" event={"ID":"212fae43-0697-448b-a100-b87394d70d3b","Type":"ContainerDied","Data":"551a18fd9b152ebe4db6bee8964554b9138f16afabf5c2e7b73ae0318d670ab6"} Dec 04 09:34:50 crc kubenswrapper[4841]: I1204 09:34:50.497864 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:34:50 crc kubenswrapper[4841]: I1204 09:34:50.498233 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:35:00 crc kubenswrapper[4841]: E1204 09:35:00.470700 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Dec 04 09:35:00 crc kubenswrapper[4841]: E1204 09:35:00.471424 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btjc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vkdp5_openshift-marketplace(212fae43-0697-448b-a100-b87394d70d3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 09:35:00 crc kubenswrapper[4841]: E1204 09:35:00.472594 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-marketplace/community-operators-vkdp5" podUID="212fae43-0697-448b-a100-b87394d70d3b" Dec 04 09:35:00 crc kubenswrapper[4841]: E1204 09:35:00.490616 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43" Dec 04 09:35:00 crc kubenswrapper[4841]: E1204 09:35:00.490827 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:interconnect-operator,Image:registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43,Command:[qdr-operator],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:qdr-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_QDROUTERD_IMAGE,Value:registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:amq7-interconnect-operator.v1.10.20,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84x6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
interconnect-operator-5bb49f789d-vpp27_service-telemetry(6f985179-ff60-499f-a53c-4d14ed90c3d6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 09:35:00 crc kubenswrapper[4841]: E1204 09:35:00.492324 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/interconnect-operator-5bb49f789d-vpp27" podUID="6f985179-ff60-499f-a53c-4d14ed90c3d6" Dec 04 09:35:00 crc kubenswrapper[4841]: E1204 09:35:00.651012 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43\\\"\"" pod="service-telemetry/interconnect-operator-5bb49f789d-vpp27" podUID="6f985179-ff60-499f-a53c-4d14ed90c3d6" Dec 04 09:35:00 crc kubenswrapper[4841]: E1204 09:35:00.721451 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-vkdp5" podUID="212fae43-0697-448b-a100-b87394d70d3b" Dec 04 09:35:08 crc kubenswrapper[4841]: E1204 09:35:08.797821 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:0c76fa21a1a5e8c354e2117d982bdf632c295de11900868cf5fbdaca8fce22f7: Get \"https://quay.io/v2/infrawatch/service-telemetry-operator/blobs/sha256:0c76fa21a1a5e8c354e2117d982bdf632c295de11900868cf5fbdaca8fce22f7\": context canceled" 
image="quay.io/infrawatch/service-telemetry-operator:latest" Dec 04 09:35:08 crc kubenswrapper[4841]: E1204 09:35:08.798797 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/service-telemetry-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:service-telemetry-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_WEBHOOK_SNMP_IMAGE,Value:quay.io/infrawatch/prometheus-webhook-snmp:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_IMAGE,Value:quay.io/prometheus/prometheus:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER_IMAGE,Value:quay.io/prometheus/alertmanager:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:service-telemetry-operator.v1.5.1764801975,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6fphg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:A
lways,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-57f5646b69-t4vdd_service-telemetry(d2338232-efab-4584-b317-2ccd0b36eaf2): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:0c76fa21a1a5e8c354e2117d982bdf632c295de11900868cf5fbdaca8fce22f7: Get \"https://quay.io/v2/infrawatch/service-telemetry-operator/blobs/sha256:0c76fa21a1a5e8c354e2117d982bdf632c295de11900868cf5fbdaca8fce22f7\": context canceled" logger="UnhandledError" Dec 04 09:35:08 crc kubenswrapper[4841]: E1204 09:35:08.800168 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:0c76fa21a1a5e8c354e2117d982bdf632c295de11900868cf5fbdaca8fce22f7: Get \\\"https://quay.io/v2/infrawatch/service-telemetry-operator/blobs/sha256:0c76fa21a1a5e8c354e2117d982bdf632c295de11900868cf5fbdaca8fce22f7\\\": context canceled\"" pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" podUID="d2338232-efab-4584-b317-2ccd0b36eaf2" Dec 04 09:35:08 crc kubenswrapper[4841]: E1204 09:35:08.956168 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Dec 04 09:35:08 crc kubenswrapper[4841]: E1204 09:35:08.956333 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:quay.io/infrawatch/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:quay.io/infrawatch/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1764801973,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g8gs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRoot
Filesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-595bcf4c87-mp4jt_service-telemetry(2ff86174-988f-4af5-a9cb-2a8a1b6feb5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 09:35:08 crc kubenswrapper[4841]: E1204 09:35:08.957561 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" podUID="2ff86174-988f-4af5-a9cb-2a8a1b6feb5d" Dec 04 09:35:09 crc kubenswrapper[4841]: E1204 09:35:09.708481 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/service-telemetry-operator:latest\\\"\"" pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" podUID="d2338232-efab-4584-b317-2ccd0b36eaf2" Dec 04 09:35:09 crc kubenswrapper[4841]: E1204 09:35:09.708506 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" podUID="2ff86174-988f-4af5-a9cb-2a8a1b6feb5d" Dec 04 09:35:13 crc kubenswrapper[4841]: I1204 09:35:13.745232 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkdp5" 
event={"ID":"212fae43-0697-448b-a100-b87394d70d3b","Type":"ContainerStarted","Data":"61cc4ab183e54ac8ef0963e5d67abee1d612b8fb6b152a508cbfea13ccfff9d4"} Dec 04 09:35:13 crc kubenswrapper[4841]: I1204 09:35:13.747004 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-vpp27" event={"ID":"6f985179-ff60-499f-a53c-4d14ed90c3d6","Type":"ContainerStarted","Data":"8b6bf1d2f2bd167be7ad404c75e010b6db378642dc8ffe58e8d2ffbc2b329d85"} Dec 04 09:35:13 crc kubenswrapper[4841]: I1204 09:35:13.768195 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vkdp5" podStartSLOduration=3.286535777 podStartE2EDuration="32.76815956s" podCreationTimestamp="2025-12-04 09:34:41 +0000 UTC" firstStartedPulling="2025-12-04 09:34:43.52223228 +0000 UTC m=+950.274022484" lastFinishedPulling="2025-12-04 09:35:13.003856063 +0000 UTC m=+979.755646267" observedRunningTime="2025-12-04 09:35:13.764325759 +0000 UTC m=+980.516115963" watchObservedRunningTime="2025-12-04 09:35:13.76815956 +0000 UTC m=+980.519949764" Dec 04 09:35:13 crc kubenswrapper[4841]: I1204 09:35:13.787729 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-vpp27" podStartSLOduration=2.447713373 podStartE2EDuration="32.787698705s" podCreationTimestamp="2025-12-04 09:34:41 +0000 UTC" firstStartedPulling="2025-12-04 09:34:42.287890279 +0000 UTC m=+949.039680483" lastFinishedPulling="2025-12-04 09:35:12.627875591 +0000 UTC m=+979.379665815" observedRunningTime="2025-12-04 09:35:13.779831508 +0000 UTC m=+980.531621722" watchObservedRunningTime="2025-12-04 09:35:13.787698705 +0000 UTC m=+980.539488919" Dec 04 09:35:20 crc kubenswrapper[4841]: I1204 09:35:20.498261 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:35:20 crc kubenswrapper[4841]: I1204 09:35:20.498991 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:35:20 crc kubenswrapper[4841]: I1204 09:35:20.499056 4841 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:35:20 crc kubenswrapper[4841]: I1204 09:35:20.499951 4841 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac966342fc1483cff7083af17cc1e40ce4b6cc956c6529691732d65680c9dfa4"} pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 09:35:20 crc kubenswrapper[4841]: I1204 09:35:20.500051 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" containerID="cri-o://ac966342fc1483cff7083af17cc1e40ce4b6cc956c6529691732d65680c9dfa4" gracePeriod=600 Dec 04 09:35:21 crc kubenswrapper[4841]: I1204 09:35:21.805421 4841 generic.go:334] "Generic (PLEG): container finished" podID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerID="ac966342fc1483cff7083af17cc1e40ce4b6cc956c6529691732d65680c9dfa4" exitCode=0 Dec 04 09:35:21 crc kubenswrapper[4841]: I1204 09:35:21.805517 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" 
event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerDied","Data":"ac966342fc1483cff7083af17cc1e40ce4b6cc956c6529691732d65680c9dfa4"} Dec 04 09:35:21 crc kubenswrapper[4841]: I1204 09:35:21.806108 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerStarted","Data":"be1ab26a9ac362b21017f4010e4bc6269da805850098c31a45b045e6974aad77"} Dec 04 09:35:21 crc kubenswrapper[4841]: I1204 09:35:21.806138 4841 scope.go:117] "RemoveContainer" containerID="dbab89145cc3e3f957a444edc7e520ea73581e21fddeb3e6fa00bb9bfcf2af76" Dec 04 09:35:21 crc kubenswrapper[4841]: I1204 09:35:21.852501 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:35:21 crc kubenswrapper[4841]: I1204 09:35:21.852565 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:35:21 crc kubenswrapper[4841]: I1204 09:35:21.898329 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:35:22 crc kubenswrapper[4841]: I1204 09:35:22.875532 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:35:23 crc kubenswrapper[4841]: I1204 09:35:23.825496 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" event={"ID":"d2338232-efab-4584-b317-2ccd0b36eaf2","Type":"ContainerStarted","Data":"0814fece28758e96202b0b17112cb6257f5efb52f177599a4372acc59890fe0e"} Dec 04 09:35:23 crc kubenswrapper[4841]: I1204 09:35:23.855371 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-57f5646b69-t4vdd" 
podStartSLOduration=2.319655681 podStartE2EDuration="41.855341522s" podCreationTimestamp="2025-12-04 09:34:42 +0000 UTC" firstStartedPulling="2025-12-04 09:34:43.771398628 +0000 UTC m=+950.523188832" lastFinishedPulling="2025-12-04 09:35:23.307084449 +0000 UTC m=+990.058874673" observedRunningTime="2025-12-04 09:35:23.846382959 +0000 UTC m=+990.598173203" watchObservedRunningTime="2025-12-04 09:35:23.855341522 +0000 UTC m=+990.607131776" Dec 04 09:35:25 crc kubenswrapper[4841]: I1204 09:35:25.501246 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vkdp5"] Dec 04 09:35:25 crc kubenswrapper[4841]: I1204 09:35:25.501713 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vkdp5" podUID="212fae43-0697-448b-a100-b87394d70d3b" containerName="registry-server" containerID="cri-o://61cc4ab183e54ac8ef0963e5d67abee1d612b8fb6b152a508cbfea13ccfff9d4" gracePeriod=2 Dec 04 09:35:25 crc kubenswrapper[4841]: I1204 09:35:25.842650 4841 generic.go:334] "Generic (PLEG): container finished" podID="212fae43-0697-448b-a100-b87394d70d3b" containerID="61cc4ab183e54ac8ef0963e5d67abee1d612b8fb6b152a508cbfea13ccfff9d4" exitCode=0 Dec 04 09:35:25 crc kubenswrapper[4841]: I1204 09:35:25.842734 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkdp5" event={"ID":"212fae43-0697-448b-a100-b87394d70d3b","Type":"ContainerDied","Data":"61cc4ab183e54ac8ef0963e5d67abee1d612b8fb6b152a508cbfea13ccfff9d4"} Dec 04 09:35:25 crc kubenswrapper[4841]: I1204 09:35:25.844821 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" event={"ID":"2ff86174-988f-4af5-a9cb-2a8a1b6feb5d","Type":"ContainerStarted","Data":"4b1557e0e4cba2e43127ce361aa298e95f54b22633bf5aa77a04a4897c05ca3f"} Dec 04 09:35:25 crc kubenswrapper[4841]: I1204 09:35:25.861308 4841 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-595bcf4c87-mp4jt" podStartSLOduration=1.999201486 podStartE2EDuration="46.861289381s" podCreationTimestamp="2025-12-04 09:34:39 +0000 UTC" firstStartedPulling="2025-12-04 09:34:40.234000897 +0000 UTC m=+946.985791101" lastFinishedPulling="2025-12-04 09:35:25.096088792 +0000 UTC m=+991.847878996" observedRunningTime="2025-12-04 09:35:25.859244302 +0000 UTC m=+992.611034516" watchObservedRunningTime="2025-12-04 09:35:25.861289381 +0000 UTC m=+992.613079595" Dec 04 09:35:25 crc kubenswrapper[4841]: I1204 09:35:25.947908 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.041286 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-catalog-content\") pod \"212fae43-0697-448b-a100-b87394d70d3b\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.041386 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btjc9\" (UniqueName: \"kubernetes.io/projected/212fae43-0697-448b-a100-b87394d70d3b-kube-api-access-btjc9\") pod \"212fae43-0697-448b-a100-b87394d70d3b\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.041479 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-utilities\") pod \"212fae43-0697-448b-a100-b87394d70d3b\" (UID: \"212fae43-0697-448b-a100-b87394d70d3b\") " Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.042324 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-utilities" (OuterVolumeSpecName: "utilities") pod "212fae43-0697-448b-a100-b87394d70d3b" (UID: "212fae43-0697-448b-a100-b87394d70d3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.058104 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/212fae43-0697-448b-a100-b87394d70d3b-kube-api-access-btjc9" (OuterVolumeSpecName: "kube-api-access-btjc9") pod "212fae43-0697-448b-a100-b87394d70d3b" (UID: "212fae43-0697-448b-a100-b87394d70d3b"). InnerVolumeSpecName "kube-api-access-btjc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.099738 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "212fae43-0697-448b-a100-b87394d70d3b" (UID: "212fae43-0697-448b-a100-b87394d70d3b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.142796 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.142834 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btjc9\" (UniqueName: \"kubernetes.io/projected/212fae43-0697-448b-a100-b87394d70d3b-kube-api-access-btjc9\") on node \"crc\" DevicePath \"\"" Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.142867 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212fae43-0697-448b-a100-b87394d70d3b-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.853800 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkdp5" event={"ID":"212fae43-0697-448b-a100-b87394d70d3b","Type":"ContainerDied","Data":"0d8e71aea4b2c0d07996d505de7c476ff44abf274a92e39b290db90332b2850b"} Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.855041 4841 scope.go:117] "RemoveContainer" containerID="61cc4ab183e54ac8ef0963e5d67abee1d612b8fb6b152a508cbfea13ccfff9d4" Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.853880 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vkdp5" Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.873581 4841 scope.go:117] "RemoveContainer" containerID="551a18fd9b152ebe4db6bee8964554b9138f16afabf5c2e7b73ae0318d670ab6" Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.878452 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vkdp5"] Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.881366 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vkdp5"] Dec 04 09:35:26 crc kubenswrapper[4841]: I1204 09:35:26.907255 4841 scope.go:117] "RemoveContainer" containerID="9bebefe8c1728c5555498173a0b31fe854716594fb0dbbdf8bcc042acfb4dc59" Dec 04 09:35:27 crc kubenswrapper[4841]: I1204 09:35:27.631626 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="212fae43-0697-448b-a100-b87394d70d3b" path="/var/lib/kubelet/pods/212fae43-0697-448b-a100-b87394d70d3b/volumes" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.988338 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-kqmct"] Dec 04 09:35:46 crc kubenswrapper[4841]: E1204 09:35:46.989244 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212fae43-0697-448b-a100-b87394d70d3b" containerName="extract-content" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.989267 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="212fae43-0697-448b-a100-b87394d70d3b" containerName="extract-content" Dec 04 09:35:46 crc kubenswrapper[4841]: E1204 09:35:46.989306 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212fae43-0697-448b-a100-b87394d70d3b" containerName="registry-server" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.989318 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="212fae43-0697-448b-a100-b87394d70d3b" 
containerName="registry-server" Dec 04 09:35:46 crc kubenswrapper[4841]: E1204 09:35:46.989338 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212fae43-0697-448b-a100-b87394d70d3b" containerName="extract-utilities" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.989352 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="212fae43-0697-448b-a100-b87394d70d3b" containerName="extract-utilities" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.989491 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="212fae43-0697-448b-a100-b87394d70d3b" containerName="registry-server" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.990129 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.991938 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.993082 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.993424 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.993652 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.993881 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 04 09:35:46 crc kubenswrapper[4841]: I1204 09:35:46.994137 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 04 09:35:46 crc 
kubenswrapper[4841]: I1204 09:35:46.994354 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-22m6r" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.003457 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-kqmct"] Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.028397 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.028506 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.028602 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-config\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.028644 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-users\") 
pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.028678 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.028722 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.028751 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vmdd\" (UniqueName: \"kubernetes.io/projected/c7cfca47-f36a-411b-b206-2e60452f095d-kube-api-access-2vmdd\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.130217 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " 
pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.130293 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vmdd\" (UniqueName: \"kubernetes.io/projected/c7cfca47-f36a-411b-b206-2e60452f095d-kube-api-access-2vmdd\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.130335 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.130407 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.130466 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-config\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.130503 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: 
\"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-users\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.130532 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.131366 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-config\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.141716 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-users\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.142095 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc 
kubenswrapper[4841]: I1204 09:35:47.142224 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.144158 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.145296 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.151203 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vmdd\" (UniqueName: \"kubernetes.io/projected/c7cfca47-f36a-411b-b206-2e60452f095d-kube-api-access-2vmdd\") pod \"default-interconnect-68864d46cb-kqmct\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.353694 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:35:47 crc kubenswrapper[4841]: I1204 09:35:47.601960 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-kqmct"] Dec 04 09:35:48 crc kubenswrapper[4841]: I1204 09:35:48.026038 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-kqmct" event={"ID":"c7cfca47-f36a-411b-b206-2e60452f095d","Type":"ContainerStarted","Data":"7801d9bd192d48dce06b5a47fb885a777ba9b7fb3eec2d9b39b5486d8063e9f9"} Dec 04 09:35:53 crc kubenswrapper[4841]: I1204 09:35:53.070660 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-kqmct" event={"ID":"c7cfca47-f36a-411b-b206-2e60452f095d","Type":"ContainerStarted","Data":"112207dcad702806e1873a0584aea006b9748a8d47c0be0aadab38444d30258a"} Dec 04 09:35:53 crc kubenswrapper[4841]: I1204 09:35:53.094911 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-kqmct" podStartSLOduration=2.808391469 podStartE2EDuration="7.094893234s" podCreationTimestamp="2025-12-04 09:35:46 +0000 UTC" firstStartedPulling="2025-12-04 09:35:47.595245846 +0000 UTC m=+1014.347036060" lastFinishedPulling="2025-12-04 09:35:51.881747611 +0000 UTC m=+1018.633537825" observedRunningTime="2025-12-04 09:35:53.093758437 +0000 UTC m=+1019.845548661" watchObservedRunningTime="2025-12-04 09:35:53.094893234 +0000 UTC m=+1019.846683448" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.806601 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.808855 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.814007 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.814012 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.814390 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.814480 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-vj945" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.815119 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.816983 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.818026 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.819454 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.828691 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.976534 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-prometheus-proxy-tls\") pod 
\"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.976586 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e4297168-8ecb-4882-8816-ea851a9c21be\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e4297168-8ecb-4882-8816-ea851a9c21be\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.976613 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-config\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.976803 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-web-config\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.976854 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.977096 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-tls-assets\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.977165 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.977363 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxlwt\" (UniqueName: \"kubernetes.io/projected/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-kube-api-access-cxlwt\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.977417 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:56 crc kubenswrapper[4841]: I1204 09:35:56.977500 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-config-out\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.078714 4841 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cxlwt\" (UniqueName: \"kubernetes.io/projected/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-kube-api-access-cxlwt\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.079173 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.079399 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-config-out\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.079576 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.079813 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e4297168-8ecb-4882-8816-ea851a9c21be\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e4297168-8ecb-4882-8816-ea851a9c21be\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: E1204 09:35:57.079850 4841 
secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 04 09:35:57 crc kubenswrapper[4841]: E1204 09:35:57.080263 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-prometheus-proxy-tls podName:2f2591b2-4c37-4eb0-afa7-a3d2238b6c03 nodeName:}" failed. No retries permitted until 2025-12-04 09:35:57.58022781 +0000 UTC m=+1024.332018084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "2f2591b2-4c37-4eb0-afa7-a3d2238b6c03") : secret "default-prometheus-proxy-tls" not found Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.080141 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-config\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.080385 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-web-config\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.080413 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 
crc kubenswrapper[4841]: I1204 09:35:57.080525 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.080546 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-tls-assets\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.080606 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.081999 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.087311 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-tls-assets\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: 
I1204 09:35:57.087418 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-config-out\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.087446 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.087504 4841 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.087544 4841 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e4297168-8ecb-4882-8816-ea851a9c21be\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e4297168-8ecb-4882-8816-ea851a9c21be\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5bfbb115d2abf7d9644fe7663eaca36dfe71f66368a509227d61e7f80e1d26bb/globalmount\"" pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.093875 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-config\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.098834 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-web-config\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.120743 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxlwt\" (UniqueName: \"kubernetes.io/projected/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-kube-api-access-cxlwt\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.124459 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e4297168-8ecb-4882-8816-ea851a9c21be\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e4297168-8ecb-4882-8816-ea851a9c21be\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: I1204 09:35:57.589692 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:57 crc kubenswrapper[4841]: E1204 09:35:57.589967 4841 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 04 09:35:57 crc kubenswrapper[4841]: E1204 09:35:57.590416 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-prometheus-proxy-tls podName:2f2591b2-4c37-4eb0-afa7-a3d2238b6c03 nodeName:}" failed. 
No retries permitted until 2025-12-04 09:35:58.590386306 +0000 UTC m=+1025.342176550 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "2f2591b2-4c37-4eb0-afa7-a3d2238b6c03") : secret "default-prometheus-proxy-tls" not found Dec 04 09:35:58 crc kubenswrapper[4841]: I1204 09:35:58.605318 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:58 crc kubenswrapper[4841]: I1204 09:35:58.618199 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2591b2-4c37-4eb0-afa7-a3d2238b6c03-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03\") " pod="service-telemetry/prometheus-default-0" Dec 04 09:35:58 crc kubenswrapper[4841]: I1204 09:35:58.650596 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 04 09:35:58 crc kubenswrapper[4841]: I1204 09:35:58.901068 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 04 09:35:59 crc kubenswrapper[4841]: I1204 09:35:59.119404 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03","Type":"ContainerStarted","Data":"80a22440c39ff6b8b758582f2e940f46e8a0602fa6c25b20e930977acf5b783f"} Dec 04 09:36:04 crc kubenswrapper[4841]: I1204 09:36:04.154481 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03","Type":"ContainerStarted","Data":"1ea780c18228608ea0a64173398bd0e952fedb7e5900ebff7e2993d78b935f51"} Dec 04 09:36:06 crc kubenswrapper[4841]: I1204 09:36:06.450122 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25"] Dec 04 09:36:06 crc kubenswrapper[4841]: I1204 09:36:06.452871 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25" Dec 04 09:36:06 crc kubenswrapper[4841]: I1204 09:36:06.464811 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25"] Dec 04 09:36:06 crc kubenswrapper[4841]: I1204 09:36:06.639800 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnbfv\" (UniqueName: \"kubernetes.io/projected/52131b4e-c5ec-483e-8288-12a5fc2d9897-kube-api-access-gnbfv\") pod \"default-snmp-webhook-78bcbbdcff-xfd25\" (UID: \"52131b4e-c5ec-483e-8288-12a5fc2d9897\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25" Dec 04 09:36:06 crc kubenswrapper[4841]: I1204 09:36:06.741273 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnbfv\" (UniqueName: \"kubernetes.io/projected/52131b4e-c5ec-483e-8288-12a5fc2d9897-kube-api-access-gnbfv\") pod \"default-snmp-webhook-78bcbbdcff-xfd25\" (UID: \"52131b4e-c5ec-483e-8288-12a5fc2d9897\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25" Dec 04 09:36:06 crc kubenswrapper[4841]: I1204 09:36:06.772083 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnbfv\" (UniqueName: \"kubernetes.io/projected/52131b4e-c5ec-483e-8288-12a5fc2d9897-kube-api-access-gnbfv\") pod \"default-snmp-webhook-78bcbbdcff-xfd25\" (UID: \"52131b4e-c5ec-483e-8288-12a5fc2d9897\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25" Dec 04 09:36:06 crc kubenswrapper[4841]: I1204 09:36:06.782525 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25" Dec 04 09:36:07 crc kubenswrapper[4841]: I1204 09:36:07.051043 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25"] Dec 04 09:36:07 crc kubenswrapper[4841]: I1204 09:36:07.065703 4841 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 09:36:07 crc kubenswrapper[4841]: I1204 09:36:07.176903 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25" event={"ID":"52131b4e-c5ec-483e-8288-12a5fc2d9897","Type":"ContainerStarted","Data":"9a5eba60f988f7fa22daecab06d4297275fbaf9e82a7977c81e2b011978cd162"} Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.063961 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.066960 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.069503 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.069642 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.070097 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.070115 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.070169 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-mhp7j" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.070927 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.078049 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.210236 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aab67035-baf2-46d8-8bb4-e889f80b0932\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aab67035-baf2-46d8-8bb4-e889f80b0932\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.210293 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-web-config\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.210349 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.210375 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.210410 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9nkm\" (UniqueName: \"kubernetes.io/projected/ddaade57-6ede-426a-b388-c1351af31426-kube-api-access-k9nkm\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.210446 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-config-volume\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.210572 4841 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ddaade57-6ede-426a-b388-c1351af31426-tls-assets\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.210623 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.210651 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ddaade57-6ede-426a-b388-c1351af31426-config-out\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.312303 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aab67035-baf2-46d8-8bb4-e889f80b0932\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aab67035-baf2-46d8-8bb4-e889f80b0932\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.312370 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-web-config\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0" Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.312425 4841 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.312455 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.312490 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9nkm\" (UniqueName: \"kubernetes.io/projected/ddaade57-6ede-426a-b388-c1351af31426-kube-api-access-k9nkm\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.312532 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-config-volume\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.312574 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ddaade57-6ede-426a-b388-c1351af31426-tls-assets\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.312596 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.312808 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ddaade57-6ede-426a-b388-c1351af31426-config-out\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: E1204 09:36:10.312807 4841 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 04 09:36:10 crc kubenswrapper[4841]: E1204 09:36:10.312936 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls podName:ddaade57-6ede-426a-b388-c1351af31426 nodeName:}" failed. No retries permitted until 2025-12-04 09:36:10.812900372 +0000 UTC m=+1037.564690646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "ddaade57-6ede-426a-b388-c1351af31426") : secret "default-alertmanager-proxy-tls" not found
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.321058 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-web-config\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.325564 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ddaade57-6ede-426a-b388-c1351af31426-tls-assets\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.326113 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ddaade57-6ede-426a-b388-c1351af31426-config-out\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.330396 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.330497 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-config-volume\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.349937 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.368986 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9nkm\" (UniqueName: \"kubernetes.io/projected/ddaade57-6ede-426a-b388-c1351af31426-kube-api-access-k9nkm\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.389075 4841 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.389123 4841 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aab67035-baf2-46d8-8bb4-e889f80b0932\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aab67035-baf2-46d8-8bb4-e889f80b0932\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e584652e1759762879d232df1e42e552a4360bde9069a4082b568fc593d6d23f/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.484953 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aab67035-baf2-46d8-8bb4-e889f80b0932\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aab67035-baf2-46d8-8bb4-e889f80b0932\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: I1204 09:36:10.820570 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:10 crc kubenswrapper[4841]: E1204 09:36:10.820843 4841 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 04 09:36:10 crc kubenswrapper[4841]: E1204 09:36:10.820965 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls podName:ddaade57-6ede-426a-b388-c1351af31426 nodeName:}" failed. No retries permitted until 2025-12-04 09:36:11.820937558 +0000 UTC m=+1038.572727792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "ddaade57-6ede-426a-b388-c1351af31426") : secret "default-alertmanager-proxy-tls" not found
Dec 04 09:36:11 crc kubenswrapper[4841]: I1204 09:36:11.835834 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:11 crc kubenswrapper[4841]: E1204 09:36:11.836030 4841 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 04 09:36:11 crc kubenswrapper[4841]: E1204 09:36:11.836105 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls podName:ddaade57-6ede-426a-b388-c1351af31426 nodeName:}" failed. No retries permitted until 2025-12-04 09:36:13.836085994 +0000 UTC m=+1040.587876208 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "ddaade57-6ede-426a-b388-c1351af31426") : secret "default-alertmanager-proxy-tls" not found
Dec 04 09:36:12 crc kubenswrapper[4841]: I1204 09:36:12.218961 4841 generic.go:334] "Generic (PLEG): container finished" podID="2f2591b2-4c37-4eb0-afa7-a3d2238b6c03" containerID="1ea780c18228608ea0a64173398bd0e952fedb7e5900ebff7e2993d78b935f51" exitCode=0
Dec 04 09:36:12 crc kubenswrapper[4841]: I1204 09:36:12.219023 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03","Type":"ContainerDied","Data":"1ea780c18228608ea0a64173398bd0e952fedb7e5900ebff7e2993d78b935f51"}
Dec 04 09:36:13 crc kubenswrapper[4841]: I1204 09:36:13.872973 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:13 crc kubenswrapper[4841]: I1204 09:36:13.881246 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ddaade57-6ede-426a-b388-c1351af31426-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"ddaade57-6ede-426a-b388-c1351af31426\") " pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:14 crc kubenswrapper[4841]: I1204 09:36:14.018477 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Dec 04 09:36:14 crc kubenswrapper[4841]: I1204 09:36:14.455188 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 04 09:36:15 crc kubenswrapper[4841]: I1204 09:36:15.240169 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"ddaade57-6ede-426a-b388-c1351af31426","Type":"ContainerStarted","Data":"7bac556eb29633413b9a9e046a2bc69ca0c44b4a2a0ad109c1775c0205e40d20"}
Dec 04 09:36:16 crc kubenswrapper[4841]: I1204 09:36:16.248064 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"ddaade57-6ede-426a-b388-c1351af31426","Type":"ContainerStarted","Data":"a4a23edfccf97cc4731876ff3509fcd811a5705f7e98328b0828b113e4704aa1"}
Dec 04 09:36:22 crc kubenswrapper[4841]: I1204 09:36:22.861619 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"]
Dec 04 09:36:22 crc kubenswrapper[4841]: I1204 09:36:22.864009 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:22 crc kubenswrapper[4841]: I1204 09:36:22.867317 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret"
Dec 04 09:36:22 crc kubenswrapper[4841]: I1204 09:36:22.867425 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-lhgd6"
Dec 04 09:36:22 crc kubenswrapper[4841]: I1204 09:36:22.868113 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap"
Dec 04 09:36:22 crc kubenswrapper[4841]: I1204 09:36:22.868293 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls"
Dec 04 09:36:22 crc kubenswrapper[4841]: I1204 09:36:22.885857 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"]
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.002310 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjnrc\" (UniqueName: \"kubernetes.io/projected/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-kube-api-access-kjnrc\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.002582 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.002737 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.002838 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.002913 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.104360 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.104410 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.104433 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.104473 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjnrc\" (UniqueName: \"kubernetes.io/projected/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-kube-api-access-kjnrc\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.104507 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.105288 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.105570 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: E1204 09:36:23.106389 4841 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 04 09:36:23 crc kubenswrapper[4841]: E1204 09:36:23.106443 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-default-cloud1-coll-meter-proxy-tls podName:b2c333ba-eeaf-493b-9e70-f3a0d2129d7c nodeName:}" failed. No retries permitted until 2025-12-04 09:36:23.606428281 +0000 UTC m=+1050.358218485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" (UID: "b2c333ba-eeaf-493b-9e70-f3a0d2129d7c") : secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.126938 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.137715 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjnrc\" (UniqueName: \"kubernetes.io/projected/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-kube-api-access-kjnrc\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.303640 4841 generic.go:334] "Generic (PLEG): container finished" podID="ddaade57-6ede-426a-b388-c1351af31426" containerID="a4a23edfccf97cc4731876ff3509fcd811a5705f7e98328b0828b113e4704aa1" exitCode=0
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.303701 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"ddaade57-6ede-426a-b388-c1351af31426","Type":"ContainerDied","Data":"a4a23edfccf97cc4731876ff3509fcd811a5705f7e98328b0828b113e4704aa1"}
Dec 04 09:36:23 crc kubenswrapper[4841]: I1204 09:36:23.610888 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:23 crc kubenswrapper[4841]: E1204 09:36:23.611040 4841 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 04 09:36:23 crc kubenswrapper[4841]: E1204 09:36:23.611211 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-default-cloud1-coll-meter-proxy-tls podName:b2c333ba-eeaf-493b-9e70-f3a0d2129d7c nodeName:}" failed. No retries permitted until 2025-12-04 09:36:24.611194151 +0000 UTC m=+1051.362984355 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" (UID: "b2c333ba-eeaf-493b-9e70-f3a0d2129d7c") : secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 04 09:36:24 crc kubenswrapper[4841]: I1204 09:36:24.626116 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:24 crc kubenswrapper[4841]: I1204 09:36:24.646679 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2c333ba-eeaf-493b-9e70-f3a0d2129d7c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j\" (UID: \"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:24 crc kubenswrapper[4841]: I1204 09:36:24.701401 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.378672 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"]
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.380158 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.382634 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.382679 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.397773 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"]
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.542014 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.542058 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d13cd4c4-469b-444a-b482-4bcb88d1721e-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.542104 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.542122 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvmm\" (UniqueName: \"kubernetes.io/projected/d13cd4c4-469b-444a-b482-4bcb88d1721e-kube-api-access-jpvmm\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.542168 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d13cd4c4-469b-444a-b482-4bcb88d1721e-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.643685 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d13cd4c4-469b-444a-b482-4bcb88d1721e-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.643820 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.643857 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d13cd4c4-469b-444a-b482-4bcb88d1721e-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.643920 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.643946 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvmm\" (UniqueName: \"kubernetes.io/projected/d13cd4c4-469b-444a-b482-4bcb88d1721e-kube-api-access-jpvmm\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.644752 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d13cd4c4-469b-444a-b482-4bcb88d1721e-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: E1204 09:36:25.644866 4841 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.644885 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d13cd4c4-469b-444a-b482-4bcb88d1721e-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: E1204 09:36:25.644917 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-default-cloud1-ceil-meter-proxy-tls podName:d13cd4c4-469b-444a-b482-4bcb88d1721e nodeName:}" failed. No retries permitted until 2025-12-04 09:36:26.14489714 +0000 UTC m=+1052.896687354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" (UID: "d13cd4c4-469b-444a-b482-4bcb88d1721e") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.647968 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:25 crc kubenswrapper[4841]: I1204 09:36:25.663315 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpvmm\" (UniqueName: \"kubernetes.io/projected/d13cd4c4-469b-444a-b482-4bcb88d1721e-kube-api-access-jpvmm\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:26 crc kubenswrapper[4841]: E1204 09:36:26.109945 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/prometheus-webhook-snmp:latest"
Dec 04 09:36:26 crc kubenswrapper[4841]: E1204 09:36:26.110119 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-webhook-snmp,Image:quay.io/infrawatch/prometheus-webhook-snmp:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9099,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:SNMP_COMMUNITY,Value:public,ValueFrom:nil,},EnvVar{Name:SNMP_RETRIES,Value:5,ValueFrom:nil,},EnvVar{Name:SNMP_HOST,Value:192.168.24.254,ValueFrom:nil,},EnvVar{Name:SNMP_PORT,Value:162,ValueFrom:nil,},EnvVar{Name:SNMP_TIMEOUT,Value:1,ValueFrom:nil,},EnvVar{Name:ALERT_OID_LABEL,Value:oid,ValueFrom:nil,},EnvVar{Name:TRAP_OID_PREFIX,Value:1.3.6.1.4.1.50495.15,ValueFrom:nil,},EnvVar{Name:TRAP_DEFAULT_OID,Value:1.3.6.1.4.1.50495.15.1.2.1,ValueFrom:nil,},EnvVar{Name:TRAP_DEFAULT_SEVERITY,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gnbfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod default-snmp-webhook-78bcbbdcff-xfd25_service-telemetry(52131b4e-c5ec-483e-8288-12a5fc2d9897): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 04 09:36:26 crc kubenswrapper[4841]: E1204 09:36:26.111285 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-webhook-snmp\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25" podUID="52131b4e-c5ec-483e-8288-12a5fc2d9897"
Dec 04 09:36:26 crc kubenswrapper[4841]: I1204 09:36:26.151634 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"
Dec 04 09:36:26 crc kubenswrapper[4841]: E1204 09:36:26.151865 4841 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 04 09:36:26 crc kubenswrapper[4841]: E1204 09:36:26.151962 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-default-cloud1-ceil-meter-proxy-tls podName:d13cd4c4-469b-444a-b482-4bcb88d1721e nodeName:}" failed. No retries permitted until 2025-12-04 09:36:27.151939794 +0000 UTC m=+1053.903730008 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" (UID: "d13cd4c4-469b-444a-b482-4bcb88d1721e") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 04 09:36:26 crc kubenswrapper[4841]: E1204 09:36:26.354608 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-webhook-snmp\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/prometheus-webhook-snmp:latest\\\"\"" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25" podUID="52131b4e-c5ec-483e-8288-12a5fc2d9897"
Dec 04 09:36:26 crc kubenswrapper[4841]: E1204 09:36:26.368719 4841 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/prometheus/prometheus:latest"
Dec 04 09:36:26 crc kubenswrapper[4841]: E1204 09:36:26.369295 4841 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:quay.io/prometheus/prometheus:latest,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --web.listen-address=127.0.0.1:9090 --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-default-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-prometheus-proxy-tls,ReadOnly:true,MountPath:/etc/prometheus/secrets/default-prometheus-proxy-tls,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-session-secret,ReadOnly:true,MountPath:/etc/prometheus/secrets/default-session-secret,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:configmap-serving-certs-ca-bundle,ReadOnly:true,MountPath:/etc/prometheus/configmaps/serving-certs-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-default-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-default-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxlwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[sh -c if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/healthy; elif [ -x \"$(command 
-v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[sh -c if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[sh -c if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-default-0_service-telemetry(2f2591b2-4c37-4eb0-afa7-a3d2238b6c03): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 
04 09:36:26 crc kubenswrapper[4841]: I1204 09:36:26.752424 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j"] Dec 04 09:36:27 crc kubenswrapper[4841]: I1204 09:36:27.178541 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" Dec 04 09:36:27 crc kubenswrapper[4841]: I1204 09:36:27.192974 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d13cd4c4-469b-444a-b482-4bcb88d1721e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk\" (UID: \"d13cd4c4-469b-444a-b482-4bcb88d1721e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" Dec 04 09:36:27 crc kubenswrapper[4841]: I1204 09:36:27.200747 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" Dec 04 09:36:27 crc kubenswrapper[4841]: I1204 09:36:27.344103 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" event={"ID":"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c","Type":"ContainerStarted","Data":"82d6c4358291737228e5e4929fafa31328e8c624fa8bf717b22b31e532427a13"} Dec 04 09:36:28 crc kubenswrapper[4841]: I1204 09:36:28.275710 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk"] Dec 04 09:36:28 crc kubenswrapper[4841]: W1204 09:36:28.299719 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd13cd4c4_469b_444a_b482_4bcb88d1721e.slice/crio-b67c7147e5f12570adc88e932f2102831342963dc5f02221bf8717cf4bafd291 WatchSource:0}: Error finding container b67c7147e5f12570adc88e932f2102831342963dc5f02221bf8717cf4bafd291: Status 404 returned error can't find the container with id b67c7147e5f12570adc88e932f2102831342963dc5f02221bf8717cf4bafd291 Dec 04 09:36:28 crc kubenswrapper[4841]: I1204 09:36:28.353249 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03","Type":"ContainerStarted","Data":"cd6d2460c366fd5145d316abfbcf9494bd8a416c300693e49759b917f110b279"} Dec 04 09:36:28 crc kubenswrapper[4841]: I1204 09:36:28.354393 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" event={"ID":"d13cd4c4-469b-444a-b482-4bcb88d1721e","Type":"ContainerStarted","Data":"b67c7147e5f12570adc88e932f2102831342963dc5f02221bf8717cf4bafd291"} Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.364116 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/alertmanager-default-0" event={"ID":"ddaade57-6ede-426a-b388-c1351af31426","Type":"ContainerStarted","Data":"fcf9b09b237ef694a5fdc6c74440f5bc47360ac5cd524db45a0b5d1f0a0ffc25"} Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.505706 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll"] Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.508028 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.510434 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.510652 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.513511 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll"] Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.617433 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/29138a71-c959-4de4-8fc6-67573c77f301-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.617498 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/29138a71-c959-4de4-8fc6-67573c77f301-socket-dir\") pod 
\"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.617522 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.617587 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.617615 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6vq\" (UniqueName: \"kubernetes.io/projected/29138a71-c959-4de4-8fc6-67573c77f301-kube-api-access-5n6vq\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.719289 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/29138a71-c959-4de4-8fc6-67573c77f301-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: 
\"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.719349 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/29138a71-c959-4de4-8fc6-67573c77f301-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.719369 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.719391 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.719412 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6vq\" (UniqueName: \"kubernetes.io/projected/29138a71-c959-4de4-8fc6-67573c77f301-kube-api-access-5n6vq\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc 
kubenswrapper[4841]: E1204 09:36:29.719659 4841 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 04 09:36:29 crc kubenswrapper[4841]: E1204 09:36:29.719816 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-default-cloud1-sens-meter-proxy-tls podName:29138a71-c959-4de4-8fc6-67573c77f301 nodeName:}" failed. No retries permitted until 2025-12-04 09:36:30.219779607 +0000 UTC m=+1056.971569811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" (UID: "29138a71-c959-4de4-8fc6-67573c77f301") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.719982 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/29138a71-c959-4de4-8fc6-67573c77f301-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.720998 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/29138a71-c959-4de4-8fc6-67573c77f301-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.739260 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:29 crc kubenswrapper[4841]: I1204 09:36:29.739410 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6vq\" (UniqueName: \"kubernetes.io/projected/29138a71-c959-4de4-8fc6-67573c77f301-kube-api-access-5n6vq\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:30 crc kubenswrapper[4841]: I1204 09:36:30.227798 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:30 crc kubenswrapper[4841]: E1204 09:36:30.227926 4841 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 04 09:36:30 crc kubenswrapper[4841]: E1204 09:36:30.228063 4841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-default-cloud1-sens-meter-proxy-tls podName:29138a71-c959-4de4-8fc6-67573c77f301 nodeName:}" failed. No retries permitted until 2025-12-04 09:36:31.228047911 +0000 UTC m=+1057.979838115 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" (UID: "29138a71-c959-4de4-8fc6-67573c77f301") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 04 09:36:30 crc kubenswrapper[4841]: I1204 09:36:30.380551 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"ddaade57-6ede-426a-b388-c1351af31426","Type":"ContainerStarted","Data":"80251cd3e94545e3dd90ada2a46abc4fb87359ff485b7204a8b7573a2a3e9163"} Dec 04 09:36:31 crc kubenswrapper[4841]: I1204 09:36:31.241163 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:31 crc kubenswrapper[4841]: I1204 09:36:31.246209 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/29138a71-c959-4de4-8fc6-67573c77f301-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll\" (UID: \"29138a71-c959-4de4-8fc6-67573c77f301\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:31 crc kubenswrapper[4841]: I1204 09:36:31.335012 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.531158 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk"] Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.532619 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.537570 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.537592 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.538193 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk"] Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.606104 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8175b165-39c3-489d-84e0-94fc420e7b87-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.606150 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8175b165-39c3-489d-84e0-94fc420e7b87-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.606170 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpwdr\" (UniqueName: \"kubernetes.io/projected/8175b165-39c3-489d-84e0-94fc420e7b87-kube-api-access-rpwdr\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.606250 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8175b165-39c3-489d-84e0-94fc420e7b87-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.706999 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8175b165-39c3-489d-84e0-94fc420e7b87-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.707804 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8175b165-39c3-489d-84e0-94fc420e7b87-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.708211 4841 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8175b165-39c3-489d-84e0-94fc420e7b87-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.708285 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8175b165-39c3-489d-84e0-94fc420e7b87-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.708313 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpwdr\" (UniqueName: \"kubernetes.io/projected/8175b165-39c3-489d-84e0-94fc420e7b87-kube-api-access-rpwdr\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.708592 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8175b165-39c3-489d-84e0-94fc420e7b87-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.716351 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8175b165-39c3-489d-84e0-94fc420e7b87-elastic-certs\") pod 
\"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.734045 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpwdr\" (UniqueName: \"kubernetes.io/projected/8175b165-39c3-489d-84e0-94fc420e7b87-kube-api-access-rpwdr\") pod \"default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk\" (UID: \"8175b165-39c3-489d-84e0-94fc420e7b87\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: E1204 09:36:35.856693 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/prometheus-default-0" podUID="2f2591b2-4c37-4eb0-afa7-a3d2238b6c03" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.891414 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" Dec 04 09:36:35 crc kubenswrapper[4841]: I1204 09:36:35.901729 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll"] Dec 04 09:36:35 crc kubenswrapper[4841]: W1204 09:36:35.910166 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29138a71_c959_4de4_8fc6_67573c77f301.slice/crio-3656d73dd756fc99f043d2a5fa2497965e68b211388f316ed3f8409a1cdb4912 WatchSource:0}: Error finding container 3656d73dd756fc99f043d2a5fa2497965e68b211388f316ed3f8409a1cdb4912: Status 404 returned error can't find the container with id 3656d73dd756fc99f043d2a5fa2497965e68b211388f316ed3f8409a1cdb4912 Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.317169 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk"] Dec 04 09:36:36 crc kubenswrapper[4841]: W1204 09:36:36.319937 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8175b165_39c3_489d_84e0_94fc420e7b87.slice/crio-fe5c1ff8bc9cdeda5b399fbd295cf0c251daa6f81f7065fa4a4e51a6fdc46906 WatchSource:0}: Error finding container fe5c1ff8bc9cdeda5b399fbd295cf0c251daa6f81f7065fa4a4e51a6fdc46906: Status 404 returned error can't find the container with id fe5c1ff8bc9cdeda5b399fbd295cf0c251daa6f81f7065fa4a4e51a6fdc46906 Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.421454 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" event={"ID":"8175b165-39c3-489d-84e0-94fc420e7b87","Type":"ContainerStarted","Data":"fe5c1ff8bc9cdeda5b399fbd295cf0c251daa6f81f7065fa4a4e51a6fdc46906"} Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.422353 4841 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" event={"ID":"29138a71-c959-4de4-8fc6-67573c77f301","Type":"ContainerStarted","Data":"3656d73dd756fc99f043d2a5fa2497965e68b211388f316ed3f8409a1cdb4912"} Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.428374 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03","Type":"ContainerStarted","Data":"dc3fc080f19204a64d2c6a7971f77392f848de583d0f9f22c57076a2a1465c99"} Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.439512 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"ddaade57-6ede-426a-b388-c1351af31426","Type":"ContainerStarted","Data":"3417bdf9d9f10bd7e9718af5e9f849848d9c3624b6f66cc2ec99909af5eb66a7"} Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.441955 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" event={"ID":"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c","Type":"ContainerStarted","Data":"1ebb9362c75c1a079d2687a6a3709cb749f9399493418b8d3af1f9ba574f644c"} Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.444675 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" event={"ID":"d13cd4c4-469b-444a-b482-4bcb88d1721e","Type":"ContainerStarted","Data":"a7606a0ff23bd0b062c7b8bc8250bfaf1e6b6b3efcb22b01d77e857194013029"} Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.504184 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=15.105173962 podStartE2EDuration="27.504166801s" podCreationTimestamp="2025-12-04 09:36:09 +0000 UTC" firstStartedPulling="2025-12-04 09:36:23.307464246 +0000 UTC m=+1050.059254450" 
lastFinishedPulling="2025-12-04 09:36:35.706457085 +0000 UTC m=+1062.458247289" observedRunningTime="2025-12-04 09:36:36.500519243 +0000 UTC m=+1063.252309477" watchObservedRunningTime="2025-12-04 09:36:36.504166801 +0000 UTC m=+1063.255957005" Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.827509 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm"] Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.828795 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.832645 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.842840 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm"] Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.936903 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh85d\" (UniqueName: \"kubernetes.io/projected/eb0cf76d-24b8-4be7-9161-58151b914f39-kube-api-access-fh85d\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.937052 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/eb0cf76d-24b8-4be7-9161-58151b914f39-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.937232 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/eb0cf76d-24b8-4be7-9161-58151b914f39-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:36 crc kubenswrapper[4841]: I1204 09:36:36.937472 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/eb0cf76d-24b8-4be7-9161-58151b914f39-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.038362 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/eb0cf76d-24b8-4be7-9161-58151b914f39-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.038661 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/eb0cf76d-24b8-4be7-9161-58151b914f39-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.038707 4841 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fh85d\" (UniqueName: \"kubernetes.io/projected/eb0cf76d-24b8-4be7-9161-58151b914f39-kube-api-access-fh85d\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.038749 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/eb0cf76d-24b8-4be7-9161-58151b914f39-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.040249 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/eb0cf76d-24b8-4be7-9161-58151b914f39-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.040660 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/eb0cf76d-24b8-4be7-9161-58151b914f39-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.045538 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/eb0cf76d-24b8-4be7-9161-58151b914f39-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" 
(UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.056443 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh85d\" (UniqueName: \"kubernetes.io/projected/eb0cf76d-24b8-4be7-9161-58151b914f39-kube-api-access-fh85d\") pod \"default-cloud1-ceil-event-smartgateway-587c778df-zj5mm\" (UID: \"eb0cf76d-24b8-4be7-9161-58151b914f39\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.158638 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.452863 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" event={"ID":"29138a71-c959-4de4-8fc6-67573c77f301","Type":"ContainerStarted","Data":"1f5d67f888fba83459aefe166ce2e05091f59847d11fa006aa691b0add47c68e"} Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.478032 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"2f2591b2-4c37-4eb0-afa7-a3d2238b6c03","Type":"ContainerStarted","Data":"be19a78d77ff6140a4de16992db9a6897c7bcb7123c0a6342bf0dfde4b52b4d0"} Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.498252 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.441891222 podStartE2EDuration="42.49823424s" podCreationTimestamp="2025-12-04 09:35:55 +0000 UTC" firstStartedPulling="2025-12-04 09:35:58.914508511 +0000 UTC m=+1025.666298715" lastFinishedPulling="2025-12-04 09:36:36.970851529 +0000 UTC m=+1063.722641733" observedRunningTime="2025-12-04 09:36:37.495012693 +0000 UTC 
m=+1064.246802897" watchObservedRunningTime="2025-12-04 09:36:37.49823424 +0000 UTC m=+1064.250024444" Dec 04 09:36:37 crc kubenswrapper[4841]: I1204 09:36:37.593321 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm"] Dec 04 09:36:38 crc kubenswrapper[4841]: I1204 09:36:38.484667 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" event={"ID":"eb0cf76d-24b8-4be7-9161-58151b914f39","Type":"ContainerStarted","Data":"c670f8393183f76afe024686813323cd2576485fb92fd44b2ee6995c95eb79d1"} Dec 04 09:36:38 crc kubenswrapper[4841]: I1204 09:36:38.661690 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Dec 04 09:36:39 crc kubenswrapper[4841]: I1204 09:36:39.494910 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25" event={"ID":"52131b4e-c5ec-483e-8288-12a5fc2d9897","Type":"ContainerStarted","Data":"d6126e0eb0c05cd555bee3d3c8921ad59c8cff2f60bf5ec1a2ccc169618a940f"} Dec 04 09:36:39 crc kubenswrapper[4841]: I1204 09:36:39.518563 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-xfd25" podStartSLOduration=1.400187531 podStartE2EDuration="33.518544369s" podCreationTimestamp="2025-12-04 09:36:06 +0000 UTC" firstStartedPulling="2025-12-04 09:36:07.065499925 +0000 UTC m=+1033.817290129" lastFinishedPulling="2025-12-04 09:36:39.183856763 +0000 UTC m=+1065.935646967" observedRunningTime="2025-12-04 09:36:39.508082128 +0000 UTC m=+1066.259872332" watchObservedRunningTime="2025-12-04 09:36:39.518544369 +0000 UTC m=+1066.270334573" Dec 04 09:36:43 crc kubenswrapper[4841]: I1204 09:36:43.650833 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 04 
09:36:43 crc kubenswrapper[4841]: I1204 09:36:43.740128 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 04 09:36:44 crc kubenswrapper[4841]: I1204 09:36:44.534966 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" event={"ID":"d13cd4c4-469b-444a-b482-4bcb88d1721e","Type":"ContainerStarted","Data":"66de7674cd806963dd229b1ca3a49398bc90a61bb77c16a38ac4baea04766e7f"} Dec 04 09:36:44 crc kubenswrapper[4841]: I1204 09:36:44.536946 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" event={"ID":"eb0cf76d-24b8-4be7-9161-58151b914f39","Type":"ContainerStarted","Data":"0ee581510ab4f63337ae952627fee62a1d51443851fb9b2e90f908e2aa5f7d2a"} Dec 04 09:36:44 crc kubenswrapper[4841]: I1204 09:36:44.538727 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" event={"ID":"8175b165-39c3-489d-84e0-94fc420e7b87","Type":"ContainerStarted","Data":"a0b369b4efff86cb491c5a0b17ae0e067fc9ee60b412ac7545d6600b343e6a83"} Dec 04 09:36:44 crc kubenswrapper[4841]: I1204 09:36:44.540614 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" event={"ID":"29138a71-c959-4de4-8fc6-67573c77f301","Type":"ContainerStarted","Data":"4ee7784734a94fdcfffea2dba80ae7233e5210a95eac4c435c69dbd7205a50aa"} Dec 04 09:36:44 crc kubenswrapper[4841]: I1204 09:36:44.543066 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" event={"ID":"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c","Type":"ContainerStarted","Data":"4723620fb1d58dd58310b11dbd488332512b951df0f07c5f81d939e9f47b2ae2"} Dec 04 09:36:44 crc kubenswrapper[4841]: I1204 09:36:44.594335 4841 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 04 09:36:48 crc kubenswrapper[4841]: I1204 09:36:48.254442 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-kqmct"] Dec 04 09:36:48 crc kubenswrapper[4841]: I1204 09:36:48.255348 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-kqmct" podUID="c7cfca47-f36a-411b-b206-2e60452f095d" containerName="default-interconnect" containerID="cri-o://112207dcad702806e1873a0584aea006b9748a8d47c0be0aadab38444d30258a" gracePeriod=30 Dec 04 09:36:50 crc kubenswrapper[4841]: I1204 09:36:50.595115 4841 generic.go:334] "Generic (PLEG): container finished" podID="d13cd4c4-469b-444a-b482-4bcb88d1721e" containerID="66de7674cd806963dd229b1ca3a49398bc90a61bb77c16a38ac4baea04766e7f" exitCode=0 Dec 04 09:36:50 crc kubenswrapper[4841]: I1204 09:36:50.595236 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" event={"ID":"d13cd4c4-469b-444a-b482-4bcb88d1721e","Type":"ContainerDied","Data":"66de7674cd806963dd229b1ca3a49398bc90a61bb77c16a38ac4baea04766e7f"} Dec 04 09:36:50 crc kubenswrapper[4841]: I1204 09:36:50.600611 4841 generic.go:334] "Generic (PLEG): container finished" podID="c7cfca47-f36a-411b-b206-2e60452f095d" containerID="112207dcad702806e1873a0584aea006b9748a8d47c0be0aadab38444d30258a" exitCode=0 Dec 04 09:36:50 crc kubenswrapper[4841]: I1204 09:36:50.600862 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-kqmct" event={"ID":"c7cfca47-f36a-411b-b206-2e60452f095d","Type":"ContainerDied","Data":"112207dcad702806e1873a0584aea006b9748a8d47c0be0aadab38444d30258a"} Dec 04 09:36:50 crc kubenswrapper[4841]: I1204 09:36:50.603250 4841 generic.go:334] "Generic (PLEG): container finished" 
podID="8175b165-39c3-489d-84e0-94fc420e7b87" containerID="a0b369b4efff86cb491c5a0b17ae0e067fc9ee60b412ac7545d6600b343e6a83" exitCode=0 Dec 04 09:36:50 crc kubenswrapper[4841]: I1204 09:36:50.603311 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" event={"ID":"8175b165-39c3-489d-84e0-94fc420e7b87","Type":"ContainerDied","Data":"a0b369b4efff86cb491c5a0b17ae0e067fc9ee60b412ac7545d6600b343e6a83"} Dec 04 09:36:50 crc kubenswrapper[4841]: I1204 09:36:50.606152 4841 generic.go:334] "Generic (PLEG): container finished" podID="29138a71-c959-4de4-8fc6-67573c77f301" containerID="4ee7784734a94fdcfffea2dba80ae7233e5210a95eac4c435c69dbd7205a50aa" exitCode=0 Dec 04 09:36:50 crc kubenswrapper[4841]: I1204 09:36:50.606207 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" event={"ID":"29138a71-c959-4de4-8fc6-67573c77f301","Type":"ContainerDied","Data":"4ee7784734a94fdcfffea2dba80ae7233e5210a95eac4c435c69dbd7205a50aa"} Dec 04 09:36:51 crc kubenswrapper[4841]: I1204 09:36:51.620370 4841 generic.go:334] "Generic (PLEG): container finished" podID="eb0cf76d-24b8-4be7-9161-58151b914f39" containerID="0ee581510ab4f63337ae952627fee62a1d51443851fb9b2e90f908e2aa5f7d2a" exitCode=0 Dec 04 09:36:51 crc kubenswrapper[4841]: I1204 09:36:51.623052 4841 generic.go:334] "Generic (PLEG): container finished" podID="b2c333ba-eeaf-493b-9e70-f3a0d2129d7c" containerID="4723620fb1d58dd58310b11dbd488332512b951df0f07c5f81d939e9f47b2ae2" exitCode=0 Dec 04 09:36:51 crc kubenswrapper[4841]: I1204 09:36:51.631000 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" event={"ID":"eb0cf76d-24b8-4be7-9161-58151b914f39","Type":"ContainerDied","Data":"0ee581510ab4f63337ae952627fee62a1d51443851fb9b2e90f908e2aa5f7d2a"} Dec 04 09:36:51 crc kubenswrapper[4841]: I1204 
09:36:51.631053 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" event={"ID":"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c","Type":"ContainerDied","Data":"4723620fb1d58dd58310b11dbd488332512b951df0f07c5f81d939e9f47b2ae2"} Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.672426 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-kqmct" event={"ID":"c7cfca47-f36a-411b-b206-2e60452f095d","Type":"ContainerDied","Data":"7801d9bd192d48dce06b5a47fb885a777ba9b7fb3eec2d9b39b5486d8063e9f9"} Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.673218 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7801d9bd192d48dce06b5a47fb885a777ba9b7fb3eec2d9b39b5486d8063e9f9" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.696790 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.740927 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-68c58"] Dec 04 09:36:54 crc kubenswrapper[4841]: E1204 09:36:54.741568 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cfca47-f36a-411b-b206-2e60452f095d" containerName="default-interconnect" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.741585 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cfca47-f36a-411b-b206-2e60452f095d" containerName="default-interconnect" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.741753 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cfca47-f36a-411b-b206-2e60452f095d" containerName="default-interconnect" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.742324 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.746867 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-68c58"] Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.835638 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-ca\") pod \"c7cfca47-f36a-411b-b206-2e60452f095d\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.835719 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-credentials\") pod \"c7cfca47-f36a-411b-b206-2e60452f095d\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.835755 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-credentials\") pod \"c7cfca47-f36a-411b-b206-2e60452f095d\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.835803 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-config\") pod \"c7cfca47-f36a-411b-b206-2e60452f095d\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.835829 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-ca\") pod \"c7cfca47-f36a-411b-b206-2e60452f095d\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.835854 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vmdd\" (UniqueName: \"kubernetes.io/projected/c7cfca47-f36a-411b-b206-2e60452f095d-kube-api-access-2vmdd\") pod \"c7cfca47-f36a-411b-b206-2e60452f095d\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.835900 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-users\") pod \"c7cfca47-f36a-411b-b206-2e60452f095d\" (UID: \"c7cfca47-f36a-411b-b206-2e60452f095d\") " Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.837694 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "c7cfca47-f36a-411b-b206-2e60452f095d" (UID: "c7cfca47-f36a-411b-b206-2e60452f095d"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.841519 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "c7cfca47-f36a-411b-b206-2e60452f095d" (UID: "c7cfca47-f36a-411b-b206-2e60452f095d"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.841705 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "c7cfca47-f36a-411b-b206-2e60452f095d" (UID: "c7cfca47-f36a-411b-b206-2e60452f095d"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.841770 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cfca47-f36a-411b-b206-2e60452f095d-kube-api-access-2vmdd" (OuterVolumeSpecName: "kube-api-access-2vmdd") pod "c7cfca47-f36a-411b-b206-2e60452f095d" (UID: "c7cfca47-f36a-411b-b206-2e60452f095d"). InnerVolumeSpecName "kube-api-access-2vmdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.841910 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "c7cfca47-f36a-411b-b206-2e60452f095d" (UID: "c7cfca47-f36a-411b-b206-2e60452f095d"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.842267 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "c7cfca47-f36a-411b-b206-2e60452f095d" (UID: "c7cfca47-f36a-411b-b206-2e60452f095d"). InnerVolumeSpecName "default-interconnect-openstack-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.844065 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "c7cfca47-f36a-411b-b206-2e60452f095d" (UID: "c7cfca47-f36a-411b-b206-2e60452f095d"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.937441 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.937721 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.937779 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " 
pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.937826 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7f39520-aec3-412e-b0fe-358e97d00b51-sasl-config\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.937865 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.937885 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc2hv\" (UniqueName: \"kubernetes.io/projected/c7f39520-aec3-412e-b0fe-358e97d00b51-kube-api-access-lc2hv\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.938039 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-sasl-users\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.938182 4841 reconciler_common.go:293] "Volume detached for volume 
\"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.938226 4841 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.938255 4841 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.938284 4841 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.938310 4841 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.938335 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vmdd\" (UniqueName: \"kubernetes.io/projected/c7cfca47-f36a-411b-b206-2e60452f095d-kube-api-access-2vmdd\") on node \"crc\" DevicePath \"\"" Dec 04 09:36:54 crc kubenswrapper[4841]: I1204 09:36:54.938360 4841 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7cfca47-f36a-411b-b206-2e60452f095d-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 
09:36:55.039125 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7f39520-aec3-412e-b0fe-358e97d00b51-sasl-config\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.039185 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.039211 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc2hv\" (UniqueName: \"kubernetes.io/projected/c7f39520-aec3-412e-b0fe-358e97d00b51-kube-api-access-lc2hv\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.039252 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-sasl-users\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.039286 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-openstack-credentials\") pod 
\"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.039321 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.039361 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.041696 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7f39520-aec3-412e-b0fe-358e97d00b51-sasl-config\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.044263 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.045503 4841 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.046234 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-sasl-users\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.048386 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.049332 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7f39520-aec3-412e-b0fe-358e97d00b51-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-68c58\" (UID: \"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.057477 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc2hv\" (UniqueName: \"kubernetes.io/projected/c7f39520-aec3-412e-b0fe-358e97d00b51-kube-api-access-lc2hv\") pod \"default-interconnect-68864d46cb-68c58\" (UID: 
\"c7f39520-aec3-412e-b0fe-358e97d00b51\") " pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.090301 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-68c58" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.577087 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-68c58"] Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.681506 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" event={"ID":"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c","Type":"ContainerStarted","Data":"1b54d521fdfd85170e7fb49e0d08967e924f119191070d4472b362f2787ac10b"} Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.682284 4841 scope.go:117] "RemoveContainer" containerID="4723620fb1d58dd58310b11dbd488332512b951df0f07c5f81d939e9f47b2ae2" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.684512 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" event={"ID":"d13cd4c4-469b-444a-b482-4bcb88d1721e","Type":"ContainerStarted","Data":"0742b46b09541b8eaf5b2a20bd829f3eb6134992b6bfcbd9edca5c96b15d5cab"} Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.685106 4841 scope.go:117] "RemoveContainer" containerID="66de7674cd806963dd229b1ca3a49398bc90a61bb77c16a38ac4baea04766e7f" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.686660 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-68c58" event={"ID":"c7f39520-aec3-412e-b0fe-358e97d00b51","Type":"ContainerStarted","Data":"f489786f602f962d94cab7d70a77d99472765338316ecbda70875b9c062ec8e0"} Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.692328 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" event={"ID":"eb0cf76d-24b8-4be7-9161-58151b914f39","Type":"ContainerStarted","Data":"86dc24c54607dd68487985771029e2849393bc8bce3478ef151d7317df10d7ea"} Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.693273 4841 scope.go:117] "RemoveContainer" containerID="0ee581510ab4f63337ae952627fee62a1d51443851fb9b2e90f908e2aa5f7d2a" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.700460 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" event={"ID":"8175b165-39c3-489d-84e0-94fc420e7b87","Type":"ContainerStarted","Data":"f89d2bc5932d1aa8a87feb74d2dfefd10255b670e9a571af0158ecb17573bc82"} Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.701198 4841 scope.go:117] "RemoveContainer" containerID="a0b369b4efff86cb491c5a0b17ae0e067fc9ee60b412ac7545d6600b343e6a83" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.705520 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-kqmct" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.706015 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" event={"ID":"29138a71-c959-4de4-8fc6-67573c77f301","Type":"ContainerStarted","Data":"6cdd43a868041a20211b337f0930fed7b7899186e62447d99594999bc51b3c1a"} Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.706165 4841 scope.go:117] "RemoveContainer" containerID="4ee7784734a94fdcfffea2dba80ae7233e5210a95eac4c435c69dbd7205a50aa" Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.775466 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-kqmct"] Dec 04 09:36:55 crc kubenswrapper[4841]: I1204 09:36:55.788025 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-kqmct"] Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.714928 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" event={"ID":"8175b165-39c3-489d-84e0-94fc420e7b87","Type":"ContainerStarted","Data":"2a0785f5c13cbca0e9dedf8243496aa901180d696b486718823c9001e981441f"} Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.723287 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" event={"ID":"29138a71-c959-4de4-8fc6-67573c77f301","Type":"ContainerStarted","Data":"20157e53f14724e157c09cb80276d5d6f9aa48f8270419bbcffbe7e851240071"} Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.728442 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" 
event={"ID":"eb0cf76d-24b8-4be7-9161-58151b914f39","Type":"ContainerStarted","Data":"401c692a52fecb578695b9665ac02c323f10f42c677f88a8811b587937711638"} Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.735021 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" event={"ID":"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c","Type":"ContainerStarted","Data":"7d392049dabc469448d515acd7746cc399295c9ca29caca28ad07f0e5b4907ba"} Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.736022 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" podStartSLOduration=1.926081197 podStartE2EDuration="21.736003736s" podCreationTimestamp="2025-12-04 09:36:35 +0000 UTC" firstStartedPulling="2025-12-04 09:36:36.322701895 +0000 UTC m=+1063.074492109" lastFinishedPulling="2025-12-04 09:36:56.132624444 +0000 UTC m=+1082.884414648" observedRunningTime="2025-12-04 09:36:56.728848795 +0000 UTC m=+1083.480639009" watchObservedRunningTime="2025-12-04 09:36:56.736003736 +0000 UTC m=+1083.487793970" Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.738667 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" event={"ID":"d13cd4c4-469b-444a-b482-4bcb88d1721e","Type":"ContainerStarted","Data":"ac76e3c8bb2526d891a434fd1948581072414bf955de23dc80d132cd83a0f43c"} Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.742151 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-68c58" event={"ID":"c7f39520-aec3-412e-b0fe-358e97d00b51","Type":"ContainerStarted","Data":"f4b27da9647a2373d3f62a33b293778b2bb9c8ea19387aecf0e8b973e04660b8"} Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.755598 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" podStartSLOduration=2.208150123 podStartE2EDuration="20.755571264s" podCreationTimestamp="2025-12-04 09:36:36 +0000 UTC" firstStartedPulling="2025-12-04 09:36:37.619955626 +0000 UTC m=+1064.371745830" lastFinishedPulling="2025-12-04 09:36:56.167376767 +0000 UTC m=+1082.919166971" observedRunningTime="2025-12-04 09:36:56.743526206 +0000 UTC m=+1083.495316410" watchObservedRunningTime="2025-12-04 09:36:56.755571264 +0000 UTC m=+1083.507361498" Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.783475 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" podStartSLOduration=7.531163398 podStartE2EDuration="27.783453112s" podCreationTimestamp="2025-12-04 09:36:29 +0000 UTC" firstStartedPulling="2025-12-04 09:36:35.916328132 +0000 UTC m=+1062.668118346" lastFinishedPulling="2025-12-04 09:36:56.168617846 +0000 UTC m=+1082.920408060" observedRunningTime="2025-12-04 09:36:56.7804491 +0000 UTC m=+1083.532239324" watchObservedRunningTime="2025-12-04 09:36:56.783453112 +0000 UTC m=+1083.535243326" Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.824404 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-68c58" podStartSLOduration=8.824386402 podStartE2EDuration="8.824386402s" podCreationTimestamp="2025-12-04 09:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:36:56.82302099 +0000 UTC m=+1083.574811204" watchObservedRunningTime="2025-12-04 09:36:56.824386402 +0000 UTC m=+1083.576176606" Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.853125 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" 
podStartSLOduration=5.511992549 podStartE2EDuration="34.85310433s" podCreationTimestamp="2025-12-04 09:36:22 +0000 UTC" firstStartedPulling="2025-12-04 09:36:26.761413742 +0000 UTC m=+1053.513203946" lastFinishedPulling="2025-12-04 09:36:56.102525483 +0000 UTC m=+1082.854315727" observedRunningTime="2025-12-04 09:36:56.847996718 +0000 UTC m=+1083.599786932" watchObservedRunningTime="2025-12-04 09:36:56.85310433 +0000 UTC m=+1083.604894534" Dec 04 09:36:56 crc kubenswrapper[4841]: I1204 09:36:56.881982 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" podStartSLOduration=4.026583334 podStartE2EDuration="31.881826458s" podCreationTimestamp="2025-12-04 09:36:25 +0000 UTC" firstStartedPulling="2025-12-04 09:36:28.313380333 +0000 UTC m=+1055.065170537" lastFinishedPulling="2025-12-04 09:36:56.168623457 +0000 UTC m=+1082.920413661" observedRunningTime="2025-12-04 09:36:56.870272892 +0000 UTC m=+1083.622063096" watchObservedRunningTime="2025-12-04 09:36:56.881826458 +0000 UTC m=+1083.633616682" Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.624721 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cfca47-f36a-411b-b206-2e60452f095d" path="/var/lib/kubelet/pods/c7cfca47-f36a-411b-b206-2e60452f095d/volumes" Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.750707 4841 generic.go:334] "Generic (PLEG): container finished" podID="8175b165-39c3-489d-84e0-94fc420e7b87" containerID="2a0785f5c13cbca0e9dedf8243496aa901180d696b486718823c9001e981441f" exitCode=0 Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.750780 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" event={"ID":"8175b165-39c3-489d-84e0-94fc420e7b87","Type":"ContainerDied","Data":"2a0785f5c13cbca0e9dedf8243496aa901180d696b486718823c9001e981441f"} Dec 04 09:36:57 crc kubenswrapper[4841]: 
I1204 09:36:57.750833 4841 scope.go:117] "RemoveContainer" containerID="a0b369b4efff86cb491c5a0b17ae0e067fc9ee60b412ac7545d6600b343e6a83" Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.751384 4841 scope.go:117] "RemoveContainer" containerID="2a0785f5c13cbca0e9dedf8243496aa901180d696b486718823c9001e981441f" Dec 04 09:36:57 crc kubenswrapper[4841]: E1204 09:36:57.751596 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk_service-telemetry(8175b165-39c3-489d-84e0-94fc420e7b87)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" podUID="8175b165-39c3-489d-84e0-94fc420e7b87" Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.754968 4841 generic.go:334] "Generic (PLEG): container finished" podID="29138a71-c959-4de4-8fc6-67573c77f301" containerID="20157e53f14724e157c09cb80276d5d6f9aa48f8270419bbcffbe7e851240071" exitCode=0 Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.755099 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" event={"ID":"29138a71-c959-4de4-8fc6-67573c77f301","Type":"ContainerDied","Data":"20157e53f14724e157c09cb80276d5d6f9aa48f8270419bbcffbe7e851240071"} Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.755357 4841 scope.go:117] "RemoveContainer" containerID="20157e53f14724e157c09cb80276d5d6f9aa48f8270419bbcffbe7e851240071" Dec 04 09:36:57 crc kubenswrapper[4841]: E1204 09:36:57.755601 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll_service-telemetry(29138a71-c959-4de4-8fc6-67573c77f301)\"" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" podUID="29138a71-c959-4de4-8fc6-67573c77f301" Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.758241 4841 generic.go:334] "Generic (PLEG): container finished" podID="eb0cf76d-24b8-4be7-9161-58151b914f39" containerID="401c692a52fecb578695b9665ac02c323f10f42c677f88a8811b587937711638" exitCode=0 Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.758294 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" event={"ID":"eb0cf76d-24b8-4be7-9161-58151b914f39","Type":"ContainerDied","Data":"401c692a52fecb578695b9665ac02c323f10f42c677f88a8811b587937711638"} Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.758611 4841 scope.go:117] "RemoveContainer" containerID="401c692a52fecb578695b9665ac02c323f10f42c677f88a8811b587937711638" Dec 04 09:36:57 crc kubenswrapper[4841]: E1204 09:36:57.758809 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-587c778df-zj5mm_service-telemetry(eb0cf76d-24b8-4be7-9161-58151b914f39)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" podUID="eb0cf76d-24b8-4be7-9161-58151b914f39" Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.768811 4841 generic.go:334] "Generic (PLEG): container finished" podID="b2c333ba-eeaf-493b-9e70-f3a0d2129d7c" containerID="7d392049dabc469448d515acd7746cc399295c9ca29caca28ad07f0e5b4907ba" exitCode=0 Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.769062 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" event={"ID":"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c","Type":"ContainerDied","Data":"7d392049dabc469448d515acd7746cc399295c9ca29caca28ad07f0e5b4907ba"} Dec 
04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.769871 4841 scope.go:117] "RemoveContainer" containerID="7d392049dabc469448d515acd7746cc399295c9ca29caca28ad07f0e5b4907ba" Dec 04 09:36:57 crc kubenswrapper[4841]: E1204 09:36:57.770146 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j_service-telemetry(b2c333ba-eeaf-493b-9e70-f3a0d2129d7c)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" podUID="b2c333ba-eeaf-493b-9e70-f3a0d2129d7c" Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.783140 4841 generic.go:334] "Generic (PLEG): container finished" podID="d13cd4c4-469b-444a-b482-4bcb88d1721e" containerID="ac76e3c8bb2526d891a434fd1948581072414bf955de23dc80d132cd83a0f43c" exitCode=0 Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.783280 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" event={"ID":"d13cd4c4-469b-444a-b482-4bcb88d1721e","Type":"ContainerDied","Data":"ac76e3c8bb2526d891a434fd1948581072414bf955de23dc80d132cd83a0f43c"} Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.793089 4841 scope.go:117] "RemoveContainer" containerID="ac76e3c8bb2526d891a434fd1948581072414bf955de23dc80d132cd83a0f43c" Dec 04 09:36:57 crc kubenswrapper[4841]: E1204 09:36:57.793359 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk_service-telemetry(d13cd4c4-469b-444a-b482-4bcb88d1721e)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" podUID="d13cd4c4-469b-444a-b482-4bcb88d1721e" Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.795829 4841 
scope.go:117] "RemoveContainer" containerID="4ee7784734a94fdcfffea2dba80ae7233e5210a95eac4c435c69dbd7205a50aa" Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.925448 4841 scope.go:117] "RemoveContainer" containerID="0ee581510ab4f63337ae952627fee62a1d51443851fb9b2e90f908e2aa5f7d2a" Dec 04 09:36:57 crc kubenswrapper[4841]: I1204 09:36:57.971826 4841 scope.go:117] "RemoveContainer" containerID="4723620fb1d58dd58310b11dbd488332512b951df0f07c5f81d939e9f47b2ae2" Dec 04 09:36:58 crc kubenswrapper[4841]: I1204 09:36:58.015875 4841 scope.go:117] "RemoveContainer" containerID="66de7674cd806963dd229b1ca3a49398bc90a61bb77c16a38ac4baea04766e7f" Dec 04 09:36:58 crc kubenswrapper[4841]: I1204 09:36:58.793086 4841 scope.go:117] "RemoveContainer" containerID="7d392049dabc469448d515acd7746cc399295c9ca29caca28ad07f0e5b4907ba" Dec 04 09:36:58 crc kubenswrapper[4841]: E1204 09:36:58.793320 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j_service-telemetry(b2c333ba-eeaf-493b-9e70-f3a0d2129d7c)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" podUID="b2c333ba-eeaf-493b-9e70-f3a0d2129d7c" Dec 04 09:36:58 crc kubenswrapper[4841]: I1204 09:36:58.797626 4841 scope.go:117] "RemoveContainer" containerID="ac76e3c8bb2526d891a434fd1948581072414bf955de23dc80d132cd83a0f43c" Dec 04 09:36:58 crc kubenswrapper[4841]: E1204 09:36:58.798799 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk_service-telemetry(d13cd4c4-469b-444a-b482-4bcb88d1721e)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" podUID="d13cd4c4-469b-444a-b482-4bcb88d1721e" Dec 04 09:36:58 crc 
kubenswrapper[4841]: I1204 09:36:58.800734 4841 scope.go:117] "RemoveContainer" containerID="20157e53f14724e157c09cb80276d5d6f9aa48f8270419bbcffbe7e851240071" Dec 04 09:36:58 crc kubenswrapper[4841]: E1204 09:36:58.801869 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll_service-telemetry(29138a71-c959-4de4-8fc6-67573c77f301)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" podUID="29138a71-c959-4de4-8fc6-67573c77f301" Dec 04 09:36:58 crc kubenswrapper[4841]: I1204 09:36:58.805548 4841 scope.go:117] "RemoveContainer" containerID="401c692a52fecb578695b9665ac02c323f10f42c677f88a8811b587937711638" Dec 04 09:36:58 crc kubenswrapper[4841]: E1204 09:36:58.805802 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-587c778df-zj5mm_service-telemetry(eb0cf76d-24b8-4be7-9161-58151b914f39)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" podUID="eb0cf76d-24b8-4be7-9161-58151b914f39" Dec 04 09:36:58 crc kubenswrapper[4841]: I1204 09:36:58.809736 4841 scope.go:117] "RemoveContainer" containerID="2a0785f5c13cbca0e9dedf8243496aa901180d696b486718823c9001e981441f" Dec 04 09:36:58 crc kubenswrapper[4841]: E1204 09:36:58.810233 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk_service-telemetry(8175b165-39c3-489d-84e0-94fc420e7b87)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" podUID="8175b165-39c3-489d-84e0-94fc420e7b87" Dec 04 09:37:06 crc 
kubenswrapper[4841]: I1204 09:37:06.882904 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Dec 04 09:37:06 crc kubenswrapper[4841]: I1204 09:37:06.884296 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 04 09:37:06 crc kubenswrapper[4841]: I1204 09:37:06.886825 4841 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Dec 04 09:37:06 crc kubenswrapper[4841]: I1204 09:37:06.887435 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Dec 04 09:37:06 crc kubenswrapper[4841]: I1204 09:37:06.896201 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.034710 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/e2fc172e-6b3d-4972-b353-0db18593824c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"e2fc172e-6b3d-4972-b353-0db18593824c\") " pod="service-telemetry/qdr-test" Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.034819 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb94l\" (UniqueName: \"kubernetes.io/projected/e2fc172e-6b3d-4972-b353-0db18593824c-kube-api-access-sb94l\") pod \"qdr-test\" (UID: \"e2fc172e-6b3d-4972-b353-0db18593824c\") " pod="service-telemetry/qdr-test" Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.035180 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/e2fc172e-6b3d-4972-b353-0db18593824c-qdr-test-config\") pod \"qdr-test\" (UID: \"e2fc172e-6b3d-4972-b353-0db18593824c\") " pod="service-telemetry/qdr-test" Dec 04 
09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.137079 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/e2fc172e-6b3d-4972-b353-0db18593824c-qdr-test-config\") pod \"qdr-test\" (UID: \"e2fc172e-6b3d-4972-b353-0db18593824c\") " pod="service-telemetry/qdr-test" Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.137269 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/e2fc172e-6b3d-4972-b353-0db18593824c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"e2fc172e-6b3d-4972-b353-0db18593824c\") " pod="service-telemetry/qdr-test" Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.137335 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb94l\" (UniqueName: \"kubernetes.io/projected/e2fc172e-6b3d-4972-b353-0db18593824c-kube-api-access-sb94l\") pod \"qdr-test\" (UID: \"e2fc172e-6b3d-4972-b353-0db18593824c\") " pod="service-telemetry/qdr-test" Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.138116 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/e2fc172e-6b3d-4972-b353-0db18593824c-qdr-test-config\") pod \"qdr-test\" (UID: \"e2fc172e-6b3d-4972-b353-0db18593824c\") " pod="service-telemetry/qdr-test" Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.146239 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/e2fc172e-6b3d-4972-b353-0db18593824c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"e2fc172e-6b3d-4972-b353-0db18593824c\") " pod="service-telemetry/qdr-test" Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.163134 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sb94l\" (UniqueName: \"kubernetes.io/projected/e2fc172e-6b3d-4972-b353-0db18593824c-kube-api-access-sb94l\") pod \"qdr-test\" (UID: \"e2fc172e-6b3d-4972-b353-0db18593824c\") " pod="service-telemetry/qdr-test" Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.199531 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.697031 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 04 09:37:07 crc kubenswrapper[4841]: I1204 09:37:07.878535 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"e2fc172e-6b3d-4972-b353-0db18593824c","Type":"ContainerStarted","Data":"0688e7cb7fdd43e9b5016461f0933302983548169b883059f4f24442b7e64b56"} Dec 04 09:37:10 crc kubenswrapper[4841]: I1204 09:37:10.617150 4841 scope.go:117] "RemoveContainer" containerID="2a0785f5c13cbca0e9dedf8243496aa901180d696b486718823c9001e981441f" Dec 04 09:37:10 crc kubenswrapper[4841]: I1204 09:37:10.617358 4841 scope.go:117] "RemoveContainer" containerID="7d392049dabc469448d515acd7746cc399295c9ca29caca28ad07f0e5b4907ba" Dec 04 09:37:13 crc kubenswrapper[4841]: I1204 09:37:13.621028 4841 scope.go:117] "RemoveContainer" containerID="401c692a52fecb578695b9665ac02c323f10f42c677f88a8811b587937711638" Dec 04 09:37:13 crc kubenswrapper[4841]: I1204 09:37:13.625067 4841 scope.go:117] "RemoveContainer" containerID="ac76e3c8bb2526d891a434fd1948581072414bf955de23dc80d132cd83a0f43c" Dec 04 09:37:13 crc kubenswrapper[4841]: I1204 09:37:13.625486 4841 scope.go:117] "RemoveContainer" containerID="20157e53f14724e157c09cb80276d5d6f9aa48f8270419bbcffbe7e851240071" Dec 04 09:37:16 crc kubenswrapper[4841]: I1204 09:37:16.958813 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" 
event={"ID":"e2fc172e-6b3d-4972-b353-0db18593824c","Type":"ContainerStarted","Data":"6212805f1019a6b4c027672512393005b977d6eef61b734fe821e39ff26cb505"} Dec 04 09:37:16 crc kubenswrapper[4841]: I1204 09:37:16.982252 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.417346447 podStartE2EDuration="10.982232371s" podCreationTimestamp="2025-12-04 09:37:06 +0000 UTC" firstStartedPulling="2025-12-04 09:37:07.702977717 +0000 UTC m=+1094.454767921" lastFinishedPulling="2025-12-04 09:37:16.267863641 +0000 UTC m=+1103.019653845" observedRunningTime="2025-12-04 09:37:16.976643386 +0000 UTC m=+1103.728433600" watchObservedRunningTime="2025-12-04 09:37:16.982232371 +0000 UTC m=+1103.734022575" Dec 04 09:37:17 crc kubenswrapper[4841]: I1204 09:37:17.971108 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j" event={"ID":"b2c333ba-eeaf-493b-9e70-f3a0d2129d7c","Type":"ContainerStarted","Data":"79a52646c7ce1f5a783ec64173df606477ec4c70ec8dbc3b876e71a2a021b3d9"} Dec 04 09:37:17 crc kubenswrapper[4841]: I1204 09:37:17.978452 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk" event={"ID":"d13cd4c4-469b-444a-b482-4bcb88d1721e","Type":"ContainerStarted","Data":"8ec8fa144b2db208dd8c20eb4da80d1cbf3bd90bb650cdf66b8f5f1fb47565f7"} Dec 04 09:37:17 crc kubenswrapper[4841]: I1204 09:37:17.983060 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk" event={"ID":"8175b165-39c3-489d-84e0-94fc420e7b87","Type":"ContainerStarted","Data":"b111928a336fb67b65e1f23df101df8239b3ec5cc825f9579a6109b4b6d76f06"} Dec 04 09:37:17 crc kubenswrapper[4841]: I1204 09:37:17.990037 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll" event={"ID":"29138a71-c959-4de4-8fc6-67573c77f301","Type":"ContainerStarted","Data":"fef4dbe0b5950f0065406bbead10ded20e1634b560b1e51a809101c9f7401b2c"} Dec 04 09:37:17 crc kubenswrapper[4841]: I1204 09:37:17.993654 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-587c778df-zj5mm" event={"ID":"eb0cf76d-24b8-4be7-9161-58151b914f39","Type":"ContainerStarted","Data":"d2a587d2f001fbe87e8fecbd38754a1ebe291d3b76910c5ee3fc8d2dacf121b0"} Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.083623 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-gp8cz"] Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.085893 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.102246 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.102533 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.102539 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.102716 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.102685 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.103322 4841 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.103577 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-gp8cz"] Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.109338 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-healthcheck-log\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.109385 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.109407 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-config\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.109439 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsftg\" (UniqueName: \"kubernetes.io/projected/185d02f0-ff6b-4189-87bc-961edfc4f597-kube-api-access-rsftg\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 
09:37:18.109472 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-publisher\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.109508 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-sensubility-config\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.109542 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.211713 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-sensubility-config\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.211804 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: 
\"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.211839 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-healthcheck-log\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.211890 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.211918 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-config\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.211963 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsftg\" (UniqueName: \"kubernetes.io/projected/185d02f0-ff6b-4189-87bc-961edfc4f597-kube-api-access-rsftg\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.212008 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-publisher\") pod 
\"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.213529 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-healthcheck-log\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.214691 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.214851 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-config\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.214873 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-sensubility-config\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.215604 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-entrypoint-script\") pod 
\"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.215696 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-publisher\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.262523 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsftg\" (UniqueName: \"kubernetes.io/projected/185d02f0-ff6b-4189-87bc-961edfc4f597-kube-api-access-rsftg\") pod \"stf-smoketest-smoke1-gp8cz\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.409796 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.567806 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.569065 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.582495 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.619806 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhhcb\" (UniqueName: \"kubernetes.io/projected/45318b6a-913d-4c01-bf86-934cdd290565-kube-api-access-bhhcb\") pod \"curl\" (UID: \"45318b6a-913d-4c01-bf86-934cdd290565\") " pod="service-telemetry/curl" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.704405 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-gp8cz"] Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.721632 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhhcb\" (UniqueName: \"kubernetes.io/projected/45318b6a-913d-4c01-bf86-934cdd290565-kube-api-access-bhhcb\") pod \"curl\" (UID: \"45318b6a-913d-4c01-bf86-934cdd290565\") " pod="service-telemetry/curl" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.741233 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhhcb\" (UniqueName: \"kubernetes.io/projected/45318b6a-913d-4c01-bf86-934cdd290565-kube-api-access-bhhcb\") pod \"curl\" (UID: \"45318b6a-913d-4c01-bf86-934cdd290565\") " pod="service-telemetry/curl" Dec 04 09:37:18 crc kubenswrapper[4841]: I1204 09:37:18.916135 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 04 09:37:19 crc kubenswrapper[4841]: I1204 09:37:19.005041 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" event={"ID":"185d02f0-ff6b-4189-87bc-961edfc4f597","Type":"ContainerStarted","Data":"ee77e0f6330462f25c69b102d409a502854f5ccb7cdbd5c9dd963d88a40f83e1"} Dec 04 09:37:19 crc kubenswrapper[4841]: I1204 09:37:19.152550 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 04 09:37:19 crc kubenswrapper[4841]: W1204 09:37:19.167205 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45318b6a_913d_4c01_bf86_934cdd290565.slice/crio-0918977ad0ddaf0b07159ae3f3712af54cbe3546e97b8fbc0fe05b521fc440c0 WatchSource:0}: Error finding container 0918977ad0ddaf0b07159ae3f3712af54cbe3546e97b8fbc0fe05b521fc440c0: Status 404 returned error can't find the container with id 0918977ad0ddaf0b07159ae3f3712af54cbe3546e97b8fbc0fe05b521fc440c0 Dec 04 09:37:20 crc kubenswrapper[4841]: I1204 09:37:20.012618 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"45318b6a-913d-4c01-bf86-934cdd290565","Type":"ContainerStarted","Data":"0918977ad0ddaf0b07159ae3f3712af54cbe3546e97b8fbc0fe05b521fc440c0"} Dec 04 09:37:26 crc kubenswrapper[4841]: I1204 09:37:26.064405 4841 generic.go:334] "Generic (PLEG): container finished" podID="45318b6a-913d-4c01-bf86-934cdd290565" containerID="55eae7b3d42e359488559bc32c9931bc46efb9d339886970e3a765431c8f73df" exitCode=0 Dec 04 09:37:26 crc kubenswrapper[4841]: I1204 09:37:26.064626 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"45318b6a-913d-4c01-bf86-934cdd290565","Type":"ContainerDied","Data":"55eae7b3d42e359488559bc32c9931bc46efb9d339886970e3a765431c8f73df"} Dec 04 09:37:30 crc kubenswrapper[4841]: I1204 09:37:30.305753 4841 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 04 09:37:30 crc kubenswrapper[4841]: I1204 09:37:30.432209 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhhcb\" (UniqueName: \"kubernetes.io/projected/45318b6a-913d-4c01-bf86-934cdd290565-kube-api-access-bhhcb\") pod \"45318b6a-913d-4c01-bf86-934cdd290565\" (UID: \"45318b6a-913d-4c01-bf86-934cdd290565\") " Dec 04 09:37:30 crc kubenswrapper[4841]: I1204 09:37:30.448129 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45318b6a-913d-4c01-bf86-934cdd290565-kube-api-access-bhhcb" (OuterVolumeSpecName: "kube-api-access-bhhcb") pod "45318b6a-913d-4c01-bf86-934cdd290565" (UID: "45318b6a-913d-4c01-bf86-934cdd290565"). InnerVolumeSpecName "kube-api-access-bhhcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:37:30 crc kubenswrapper[4841]: I1204 09:37:30.471346 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_45318b6a-913d-4c01-bf86-934cdd290565/curl/0.log" Dec 04 09:37:30 crc kubenswrapper[4841]: I1204 09:37:30.533401 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhhcb\" (UniqueName: \"kubernetes.io/projected/45318b6a-913d-4c01-bf86-934cdd290565-kube-api-access-bhhcb\") on node \"crc\" DevicePath \"\"" Dec 04 09:37:30 crc kubenswrapper[4841]: I1204 09:37:30.757046 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-xfd25_52131b4e-c5ec-483e-8288-12a5fc2d9897/prometheus-webhook-snmp/0.log" Dec 04 09:37:31 crc kubenswrapper[4841]: I1204 09:37:31.098493 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"45318b6a-913d-4c01-bf86-934cdd290565","Type":"ContainerDied","Data":"0918977ad0ddaf0b07159ae3f3712af54cbe3546e97b8fbc0fe05b521fc440c0"} Dec 04 09:37:31 crc kubenswrapper[4841]: I1204 
09:37:31.098537 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0918977ad0ddaf0b07159ae3f3712af54cbe3546e97b8fbc0fe05b521fc440c0" Dec 04 09:37:31 crc kubenswrapper[4841]: I1204 09:37:31.098588 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 04 09:37:35 crc kubenswrapper[4841]: I1204 09:37:35.135752 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" event={"ID":"185d02f0-ff6b-4189-87bc-961edfc4f597","Type":"ContainerStarted","Data":"c8ef3d7e93f546867c119ba279f32b1dae3a9f0894930fe0e6132ce66c649305"} Dec 04 09:37:46 crc kubenswrapper[4841]: I1204 09:37:46.232390 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" event={"ID":"185d02f0-ff6b-4189-87bc-961edfc4f597","Type":"ContainerStarted","Data":"6609cb632d29674dd2fa251674fead227750c28eea284331b214f682525abbd2"} Dec 04 09:37:46 crc kubenswrapper[4841]: I1204 09:37:46.272909 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" podStartSLOduration=1.5323320599999999 podStartE2EDuration="28.272825058s" podCreationTimestamp="2025-12-04 09:37:18 +0000 UTC" firstStartedPulling="2025-12-04 09:37:18.710414135 +0000 UTC m=+1105.462204339" lastFinishedPulling="2025-12-04 09:37:45.450907123 +0000 UTC m=+1132.202697337" observedRunningTime="2025-12-04 09:37:46.258468471 +0000 UTC m=+1133.010258685" watchObservedRunningTime="2025-12-04 09:37:46.272825058 +0000 UTC m=+1133.024615302" Dec 04 09:37:50 crc kubenswrapper[4841]: I1204 09:37:50.498125 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:37:50 
crc kubenswrapper[4841]: I1204 09:37:50.498485 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:38:00 crc kubenswrapper[4841]: I1204 09:38:00.965967 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-xfd25_52131b4e-c5ec-483e-8288-12a5fc2d9897/prometheus-webhook-snmp/0.log" Dec 04 09:38:08 crc kubenswrapper[4841]: I1204 09:38:08.426053 4841 generic.go:334] "Generic (PLEG): container finished" podID="185d02f0-ff6b-4189-87bc-961edfc4f597" containerID="c8ef3d7e93f546867c119ba279f32b1dae3a9f0894930fe0e6132ce66c649305" exitCode=1 Dec 04 09:38:08 crc kubenswrapper[4841]: I1204 09:38:08.426166 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" event={"ID":"185d02f0-ff6b-4189-87bc-961edfc4f597","Type":"ContainerDied","Data":"c8ef3d7e93f546867c119ba279f32b1dae3a9f0894930fe0e6132ce66c649305"} Dec 04 09:38:08 crc kubenswrapper[4841]: I1204 09:38:08.427722 4841 scope.go:117] "RemoveContainer" containerID="c8ef3d7e93f546867c119ba279f32b1dae3a9f0894930fe0e6132ce66c649305" Dec 04 09:38:17 crc kubenswrapper[4841]: I1204 09:38:17.512693 4841 generic.go:334] "Generic (PLEG): container finished" podID="185d02f0-ff6b-4189-87bc-961edfc4f597" containerID="6609cb632d29674dd2fa251674fead227750c28eea284331b214f682525abbd2" exitCode=1 Dec 04 09:38:17 crc kubenswrapper[4841]: I1204 09:38:17.512838 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" event={"ID":"185d02f0-ff6b-4189-87bc-961edfc4f597","Type":"ContainerDied","Data":"6609cb632d29674dd2fa251674fead227750c28eea284331b214f682525abbd2"} Dec 04 09:38:18 crc kubenswrapper[4841]: 
I1204 09:38:18.918640 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.029231 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsftg\" (UniqueName: \"kubernetes.io/projected/185d02f0-ff6b-4189-87bc-961edfc4f597-kube-api-access-rsftg\") pod \"185d02f0-ff6b-4189-87bc-961edfc4f597\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.029273 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-healthcheck-log\") pod \"185d02f0-ff6b-4189-87bc-961edfc4f597\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.029326 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-publisher\") pod \"185d02f0-ff6b-4189-87bc-961edfc4f597\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.029392 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-sensubility-config\") pod \"185d02f0-ff6b-4189-87bc-961edfc4f597\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.029421 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-config\") pod \"185d02f0-ff6b-4189-87bc-961edfc4f597\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " Dec 04 09:38:19 crc 
kubenswrapper[4841]: I1204 09:38:19.029455 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-entrypoint-script\") pod \"185d02f0-ff6b-4189-87bc-961edfc4f597\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.029486 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-entrypoint-script\") pod \"185d02f0-ff6b-4189-87bc-961edfc4f597\" (UID: \"185d02f0-ff6b-4189-87bc-961edfc4f597\") " Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.046446 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "185d02f0-ff6b-4189-87bc-961edfc4f597" (UID: "185d02f0-ff6b-4189-87bc-961edfc4f597"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.052030 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185d02f0-ff6b-4189-87bc-961edfc4f597-kube-api-access-rsftg" (OuterVolumeSpecName: "kube-api-access-rsftg") pod "185d02f0-ff6b-4189-87bc-961edfc4f597" (UID: "185d02f0-ff6b-4189-87bc-961edfc4f597"). InnerVolumeSpecName "kube-api-access-rsftg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.053846 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "185d02f0-ff6b-4189-87bc-961edfc4f597" (UID: "185d02f0-ff6b-4189-87bc-961edfc4f597"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.056527 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "185d02f0-ff6b-4189-87bc-961edfc4f597" (UID: "185d02f0-ff6b-4189-87bc-961edfc4f597"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.057645 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "185d02f0-ff6b-4189-87bc-961edfc4f597" (UID: "185d02f0-ff6b-4189-87bc-961edfc4f597"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.059009 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "185d02f0-ff6b-4189-87bc-961edfc4f597" (UID: "185d02f0-ff6b-4189-87bc-961edfc4f597"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.059984 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "185d02f0-ff6b-4189-87bc-961edfc4f597" (UID: "185d02f0-ff6b-4189-87bc-961edfc4f597"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.131952 4841 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.132008 4841 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.132033 4841 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.132052 4841 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.132072 4841 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.132092 4841 
reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/185d02f0-ff6b-4189-87bc-961edfc4f597-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.132112 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsftg\" (UniqueName: \"kubernetes.io/projected/185d02f0-ff6b-4189-87bc-961edfc4f597-kube-api-access-rsftg\") on node \"crc\" DevicePath \"\"" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.534084 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" event={"ID":"185d02f0-ff6b-4189-87bc-961edfc4f597","Type":"ContainerDied","Data":"ee77e0f6330462f25c69b102d409a502854f5ccb7cdbd5c9dd963d88a40f83e1"} Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.534133 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee77e0f6330462f25c69b102d409a502854f5ccb7cdbd5c9dd963d88a40f83e1" Dec 04 09:38:19 crc kubenswrapper[4841]: I1204 09:38:19.534133 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-gp8cz" Dec 04 09:38:20 crc kubenswrapper[4841]: I1204 09:38:20.498078 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:38:20 crc kubenswrapper[4841]: I1204 09:38:20.498417 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.039932 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-wm4kc"] Dec 04 09:38:27 crc kubenswrapper[4841]: E1204 09:38:27.041161 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45318b6a-913d-4c01-bf86-934cdd290565" containerName="curl" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.041193 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="45318b6a-913d-4c01-bf86-934cdd290565" containerName="curl" Dec 04 09:38:27 crc kubenswrapper[4841]: E1204 09:38:27.041240 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185d02f0-ff6b-4189-87bc-961edfc4f597" containerName="smoketest-collectd" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.041259 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="185d02f0-ff6b-4189-87bc-961edfc4f597" containerName="smoketest-collectd" Dec 04 09:38:27 crc kubenswrapper[4841]: E1204 09:38:27.041291 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185d02f0-ff6b-4189-87bc-961edfc4f597" containerName="smoketest-ceilometer" Dec 04 09:38:27 crc 
kubenswrapper[4841]: I1204 09:38:27.041308 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="185d02f0-ff6b-4189-87bc-961edfc4f597" containerName="smoketest-ceilometer" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.041623 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="185d02f0-ff6b-4189-87bc-961edfc4f597" containerName="smoketest-ceilometer" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.041668 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="45318b6a-913d-4c01-bf86-934cdd290565" containerName="curl" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.041700 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="185d02f0-ff6b-4189-87bc-961edfc4f597" containerName="smoketest-collectd" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.045736 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.049312 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.049311 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.050208 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.050237 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.050709 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.051035 4841 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.054692 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-wm4kc"] Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.167922 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-sensubility-config\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.167998 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.168093 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.168125 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-healthcheck-log\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 
04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.168151 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.168237 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-config\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.168275 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h4nh\" (UniqueName: \"kubernetes.io/projected/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-kube-api-access-9h4nh\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.269490 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-config\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.269575 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h4nh\" (UniqueName: \"kubernetes.io/projected/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-kube-api-access-9h4nh\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: 
\"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.269639 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-sensubility-config\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.269707 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.269878 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.269931 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-healthcheck-log\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.270002 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-entrypoint-script\") 
pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.271065 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-config\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.271502 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.271660 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-sensubility-config\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.271864 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.272279 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-entrypoint-script\") 
pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.272557 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-healthcheck-log\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.303052 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h4nh\" (UniqueName: \"kubernetes.io/projected/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-kube-api-access-9h4nh\") pod \"stf-smoketest-smoke1-wm4kc\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.370241 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:38:27 crc kubenswrapper[4841]: I1204 09:38:27.632063 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-wm4kc"] Dec 04 09:38:28 crc kubenswrapper[4841]: I1204 09:38:28.619034 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" event={"ID":"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b","Type":"ContainerStarted","Data":"cfd712299fe5a6ad742ab809599338e679804b227a6f1f7311cd1789741f45e5"} Dec 04 09:38:28 crc kubenswrapper[4841]: I1204 09:38:28.619452 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" event={"ID":"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b","Type":"ContainerStarted","Data":"d7ccfa8a663dcea281ad931ede35d96e737cf0b5444b2aff71f729760ef2e3cb"} Dec 04 09:38:28 crc kubenswrapper[4841]: I1204 09:38:28.619469 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" event={"ID":"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b","Type":"ContainerStarted","Data":"aceacf3d8e57b3b3af56114101dabaf427bb0667c515e1b008f32ec6903d655d"} Dec 04 09:38:28 crc kubenswrapper[4841]: I1204 09:38:28.651443 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" podStartSLOduration=1.6514214900000002 podStartE2EDuration="1.65142149s" podCreationTimestamp="2025-12-04 09:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:38:28.648358097 +0000 UTC m=+1175.400148311" watchObservedRunningTime="2025-12-04 09:38:28.65142149 +0000 UTC m=+1175.403211704" Dec 04 09:38:50 crc kubenswrapper[4841]: I1204 09:38:50.498103 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:38:50 crc kubenswrapper[4841]: I1204 09:38:50.498803 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:38:50 crc kubenswrapper[4841]: I1204 09:38:50.498876 4841 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:38:50 crc kubenswrapper[4841]: I1204 09:38:50.499706 4841 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be1ab26a9ac362b21017f4010e4bc6269da805850098c31a45b045e6974aad77"} pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 09:38:50 crc kubenswrapper[4841]: I1204 09:38:50.499835 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" containerID="cri-o://be1ab26a9ac362b21017f4010e4bc6269da805850098c31a45b045e6974aad77" gracePeriod=600 Dec 04 09:38:50 crc kubenswrapper[4841]: I1204 09:38:50.826498 4841 generic.go:334] "Generic (PLEG): container finished" podID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerID="be1ab26a9ac362b21017f4010e4bc6269da805850098c31a45b045e6974aad77" exitCode=0 Dec 04 09:38:50 crc kubenswrapper[4841]: I1204 09:38:50.826611 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerDied","Data":"be1ab26a9ac362b21017f4010e4bc6269da805850098c31a45b045e6974aad77"} Dec 04 09:38:50 crc kubenswrapper[4841]: I1204 09:38:50.827489 4841 scope.go:117] "RemoveContainer" containerID="ac966342fc1483cff7083af17cc1e40ce4b6cc956c6529691732d65680c9dfa4" Dec 04 09:38:51 crc kubenswrapper[4841]: I1204 09:38:51.840247 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerStarted","Data":"dbe8eeee837ded207479fbd61ebaa419fe865d6e5d03d556bce405e52905df7e"} Dec 04 09:38:59 crc kubenswrapper[4841]: I1204 09:38:59.930597 4841 generic.go:334] "Generic (PLEG): container finished" podID="8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" containerID="cfd712299fe5a6ad742ab809599338e679804b227a6f1f7311cd1789741f45e5" exitCode=1 Dec 04 09:38:59 crc kubenswrapper[4841]: I1204 09:38:59.930651 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" event={"ID":"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b","Type":"ContainerDied","Data":"cfd712299fe5a6ad742ab809599338e679804b227a6f1f7311cd1789741f45e5"} Dec 04 09:38:59 crc kubenswrapper[4841]: I1204 09:38:59.932121 4841 scope.go:117] "RemoveContainer" containerID="cfd712299fe5a6ad742ab809599338e679804b227a6f1f7311cd1789741f45e5" Dec 04 09:39:01 crc kubenswrapper[4841]: I1204 09:39:01.950964 4841 generic.go:334] "Generic (PLEG): container finished" podID="8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" containerID="d7ccfa8a663dcea281ad931ede35d96e737cf0b5444b2aff71f729760ef2e3cb" exitCode=1 Dec 04 09:39:01 crc kubenswrapper[4841]: I1204 09:39:01.951098 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" 
event={"ID":"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b","Type":"ContainerDied","Data":"d7ccfa8a663dcea281ad931ede35d96e737cf0b5444b2aff71f729760ef2e3cb"} Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.315693 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.485308 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-sensubility-config\") pod \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.485386 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-entrypoint-script\") pod \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.485453 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-healthcheck-log\") pod \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.485536 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-publisher\") pod \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.485605 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: 
\"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-config\") pod \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.485643 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h4nh\" (UniqueName: \"kubernetes.io/projected/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-kube-api-access-9h4nh\") pod \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.485727 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-entrypoint-script\") pod \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\" (UID: \"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b\") " Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.494880 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-kube-api-access-9h4nh" (OuterVolumeSpecName: "kube-api-access-9h4nh") pod "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" (UID: "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b"). InnerVolumeSpecName "kube-api-access-9h4nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.517281 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" (UID: "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.518278 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" (UID: "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.519647 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" (UID: "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.520670 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" (UID: "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.522266 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" (UID: "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.526015 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" (UID: "8f3426f4-b547-4a51-b4e1-16aa23c8ab1b"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.587794 4841 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.587841 4841 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.587867 4841 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.587887 4841 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.587907 4841 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.587926 4841 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-9h4nh\" (UniqueName: \"kubernetes.io/projected/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-kube-api-access-9h4nh\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.587944 4841 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/8f3426f4-b547-4a51-b4e1-16aa23c8ab1b-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.974295 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" event={"ID":"8f3426f4-b547-4a51-b4e1-16aa23c8ab1b","Type":"ContainerDied","Data":"aceacf3d8e57b3b3af56114101dabaf427bb0667c515e1b008f32ec6903d655d"} Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.974351 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aceacf3d8e57b3b3af56114101dabaf427bb0667c515e1b008f32ec6903d655d" Dec 04 09:39:03 crc kubenswrapper[4841]: I1204 09:39:03.974435 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-wm4kc" Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.419517 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-4f6dd"] Dec 04 09:39:06 crc kubenswrapper[4841]: E1204 09:39:06.420630 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" containerName="smoketest-ceilometer" Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.420668 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" containerName="smoketest-ceilometer" Dec 04 09:39:06 crc kubenswrapper[4841]: E1204 09:39:06.420797 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" containerName="smoketest-collectd" Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.420819 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" containerName="smoketest-collectd" Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.421114 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" containerName="smoketest-ceilometer" Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.421152 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3426f4-b547-4a51-b4e1-16aa23c8ab1b" containerName="smoketest-collectd" Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.422090 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.448352 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp9mc\" (UniqueName: \"kubernetes.io/projected/a318ddbc-806f-4844-93fb-dac38c1d2b73-kube-api-access-tp9mc\") pod \"infrawatch-operators-4f6dd\" (UID: \"a318ddbc-806f-4844-93fb-dac38c1d2b73\") " pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.472253 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-4f6dd"] Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.549687 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp9mc\" (UniqueName: \"kubernetes.io/projected/a318ddbc-806f-4844-93fb-dac38c1d2b73-kube-api-access-tp9mc\") pod \"infrawatch-operators-4f6dd\" (UID: \"a318ddbc-806f-4844-93fb-dac38c1d2b73\") " pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.578493 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp9mc\" (UniqueName: \"kubernetes.io/projected/a318ddbc-806f-4844-93fb-dac38c1d2b73-kube-api-access-tp9mc\") pod \"infrawatch-operators-4f6dd\" (UID: \"a318ddbc-806f-4844-93fb-dac38c1d2b73\") " pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 09:39:06 crc kubenswrapper[4841]: I1204 09:39:06.784404 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 09:39:07 crc kubenswrapper[4841]: W1204 09:39:07.032115 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda318ddbc_806f_4844_93fb_dac38c1d2b73.slice/crio-20a91d6e64da0e1d50b93187129561990a194b5583bf1bf4282c9914c6632251 WatchSource:0}: Error finding container 20a91d6e64da0e1d50b93187129561990a194b5583bf1bf4282c9914c6632251: Status 404 returned error can't find the container with id 20a91d6e64da0e1d50b93187129561990a194b5583bf1bf4282c9914c6632251 Dec 04 09:39:07 crc kubenswrapper[4841]: I1204 09:39:07.032177 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-4f6dd"] Dec 04 09:39:08 crc kubenswrapper[4841]: I1204 09:39:08.021972 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-4f6dd" event={"ID":"a318ddbc-806f-4844-93fb-dac38c1d2b73","Type":"ContainerStarted","Data":"eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808"} Dec 04 09:39:08 crc kubenswrapper[4841]: I1204 09:39:08.022640 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-4f6dd" event={"ID":"a318ddbc-806f-4844-93fb-dac38c1d2b73","Type":"ContainerStarted","Data":"20a91d6e64da0e1d50b93187129561990a194b5583bf1bf4282c9914c6632251"} Dec 04 09:39:16 crc kubenswrapper[4841]: I1204 09:39:16.785241 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 09:39:16 crc kubenswrapper[4841]: I1204 09:39:16.786231 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 09:39:16 crc kubenswrapper[4841]: I1204 09:39:16.835891 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 
09:39:16 crc kubenswrapper[4841]: I1204 09:39:16.865673 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-4f6dd" podStartSLOduration=10.74718377 podStartE2EDuration="10.865648876s" podCreationTimestamp="2025-12-04 09:39:06 +0000 UTC" firstStartedPulling="2025-12-04 09:39:07.034395505 +0000 UTC m=+1213.786185729" lastFinishedPulling="2025-12-04 09:39:07.152860621 +0000 UTC m=+1213.904650835" observedRunningTime="2025-12-04 09:39:08.047345926 +0000 UTC m=+1214.799136160" watchObservedRunningTime="2025-12-04 09:39:16.865648876 +0000 UTC m=+1223.617439120" Dec 04 09:39:17 crc kubenswrapper[4841]: I1204 09:39:17.159317 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 09:39:18 crc kubenswrapper[4841]: I1204 09:39:18.401334 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-4f6dd"] Dec 04 09:39:19 crc kubenswrapper[4841]: I1204 09:39:19.127558 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-4f6dd" podUID="a318ddbc-806f-4844-93fb-dac38c1d2b73" containerName="registry-server" containerID="cri-o://eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808" gracePeriod=2 Dec 04 09:39:19 crc kubenswrapper[4841]: I1204 09:39:19.618562 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 09:39:19 crc kubenswrapper[4841]: I1204 09:39:19.657972 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp9mc\" (UniqueName: \"kubernetes.io/projected/a318ddbc-806f-4844-93fb-dac38c1d2b73-kube-api-access-tp9mc\") pod \"a318ddbc-806f-4844-93fb-dac38c1d2b73\" (UID: \"a318ddbc-806f-4844-93fb-dac38c1d2b73\") " Dec 04 09:39:19 crc kubenswrapper[4841]: I1204 09:39:19.664826 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a318ddbc-806f-4844-93fb-dac38c1d2b73-kube-api-access-tp9mc" (OuterVolumeSpecName: "kube-api-access-tp9mc") pod "a318ddbc-806f-4844-93fb-dac38c1d2b73" (UID: "a318ddbc-806f-4844-93fb-dac38c1d2b73"). InnerVolumeSpecName "kube-api-access-tp9mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:39:19 crc kubenswrapper[4841]: I1204 09:39:19.760288 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp9mc\" (UniqueName: \"kubernetes.io/projected/a318ddbc-806f-4844-93fb-dac38c1d2b73-kube-api-access-tp9mc\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:20 crc kubenswrapper[4841]: I1204 09:39:20.139880 4841 generic.go:334] "Generic (PLEG): container finished" podID="a318ddbc-806f-4844-93fb-dac38c1d2b73" containerID="eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808" exitCode=0 Dec 04 09:39:20 crc kubenswrapper[4841]: I1204 09:39:20.139918 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-4f6dd" event={"ID":"a318ddbc-806f-4844-93fb-dac38c1d2b73","Type":"ContainerDied","Data":"eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808"} Dec 04 09:39:20 crc kubenswrapper[4841]: I1204 09:39:20.139941 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-4f6dd" 
event={"ID":"a318ddbc-806f-4844-93fb-dac38c1d2b73","Type":"ContainerDied","Data":"20a91d6e64da0e1d50b93187129561990a194b5583bf1bf4282c9914c6632251"} Dec 04 09:39:20 crc kubenswrapper[4841]: I1204 09:39:20.139958 4841 scope.go:117] "RemoveContainer" containerID="eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808" Dec 04 09:39:20 crc kubenswrapper[4841]: I1204 09:39:20.140645 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-4f6dd" Dec 04 09:39:20 crc kubenswrapper[4841]: I1204 09:39:20.182178 4841 scope.go:117] "RemoveContainer" containerID="eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808" Dec 04 09:39:20 crc kubenswrapper[4841]: E1204 09:39:20.182823 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808\": container with ID starting with eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808 not found: ID does not exist" containerID="eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808" Dec 04 09:39:20 crc kubenswrapper[4841]: I1204 09:39:20.182874 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808"} err="failed to get container status \"eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808\": rpc error: code = NotFound desc = could not find container \"eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808\": container with ID starting with eebddd9ffe1951c541548145dc44bbbddd57faab15aa957b74b0b78e2fbb7808 not found: ID does not exist" Dec 04 09:39:20 crc kubenswrapper[4841]: I1204 09:39:20.188708 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-4f6dd"] Dec 04 09:39:20 crc kubenswrapper[4841]: I1204 09:39:20.195565 4841 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-4f6dd"] Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.030799 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-5wmwb"] Dec 04 09:39:21 crc kubenswrapper[4841]: E1204 09:39:21.031448 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a318ddbc-806f-4844-93fb-dac38c1d2b73" containerName="registry-server" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.031463 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="a318ddbc-806f-4844-93fb-dac38c1d2b73" containerName="registry-server" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.031627 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="a318ddbc-806f-4844-93fb-dac38c1d2b73" containerName="registry-server" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.032672 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.036533 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.036612 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.036924 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.037101 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.040243 4841 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"service-telemetry"/"stf-smoketest-collectd-config" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.040288 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.060575 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-5wmwb"] Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.082968 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-sensubility-config\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.083044 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.083163 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.083249 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-publisher\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.083312 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-healthcheck-log\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.083352 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ncz\" (UniqueName: \"kubernetes.io/projected/6c917585-1773-498c-a549-6c8546c23406-kube-api-access-x6ncz\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.083398 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-config\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.184904 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-publisher\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.185098 4841 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-healthcheck-log\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.185130 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6ncz\" (UniqueName: \"kubernetes.io/projected/6c917585-1773-498c-a549-6c8546c23406-kube-api-access-x6ncz\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.185155 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-config\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.185219 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-sensubility-config\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.185243 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.185344 4841 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.186905 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-healthcheck-log\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.187240 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-config\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.187501 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-publisher\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.189216 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-sensubility-config\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.191185 4841 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.192361 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.216587 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6ncz\" (UniqueName: \"kubernetes.io/projected/6c917585-1773-498c-a549-6c8546c23406-kube-api-access-x6ncz\") pod \"stf-smoketest-smoke1-5wmwb\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.362086 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.629226 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a318ddbc-806f-4844-93fb-dac38c1d2b73" path="/var/lib/kubelet/pods/a318ddbc-806f-4844-93fb-dac38c1d2b73/volumes" Dec 04 09:39:21 crc kubenswrapper[4841]: I1204 09:39:21.684856 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-5wmwb"] Dec 04 09:39:22 crc kubenswrapper[4841]: I1204 09:39:22.164951 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" event={"ID":"6c917585-1773-498c-a549-6c8546c23406","Type":"ContainerStarted","Data":"1fc41f75d33a09e0f10b4e6aeb72afa6d44121d7223b049f9a87dae5638d7c86"} Dec 04 09:39:22 crc kubenswrapper[4841]: I1204 09:39:22.165494 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" event={"ID":"6c917585-1773-498c-a549-6c8546c23406","Type":"ContainerStarted","Data":"f4605c3e505b14498e9376044024262a189f689dad0a65f24a1457f6d59fcd52"} Dec 04 09:39:22 crc kubenswrapper[4841]: I1204 09:39:22.165516 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" event={"ID":"6c917585-1773-498c-a549-6c8546c23406","Type":"ContainerStarted","Data":"cc4b09b6da5238e38d27d04411683a9a8a3fd1768e6c0893ab823704e79715f2"} Dec 04 09:39:22 crc kubenswrapper[4841]: I1204 09:39:22.204429 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" podStartSLOduration=1.204400988 podStartE2EDuration="1.204400988s" podCreationTimestamp="2025-12-04 09:39:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:39:22.190695618 +0000 UTC m=+1228.942485862" watchObservedRunningTime="2025-12-04 
09:39:22.204400988 +0000 UTC m=+1228.956191232" Dec 04 09:39:54 crc kubenswrapper[4841]: I1204 09:39:54.980752 4841 generic.go:334] "Generic (PLEG): container finished" podID="6c917585-1773-498c-a549-6c8546c23406" containerID="1fc41f75d33a09e0f10b4e6aeb72afa6d44121d7223b049f9a87dae5638d7c86" exitCode=1 Dec 04 09:39:54 crc kubenswrapper[4841]: I1204 09:39:54.980807 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" event={"ID":"6c917585-1773-498c-a549-6c8546c23406","Type":"ContainerDied","Data":"1fc41f75d33a09e0f10b4e6aeb72afa6d44121d7223b049f9a87dae5638d7c86"} Dec 04 09:39:54 crc kubenswrapper[4841]: I1204 09:39:54.983315 4841 scope.go:117] "RemoveContainer" containerID="1fc41f75d33a09e0f10b4e6aeb72afa6d44121d7223b049f9a87dae5638d7c86" Dec 04 09:39:56 crc kubenswrapper[4841]: I1204 09:39:56.001745 4841 generic.go:334] "Generic (PLEG): container finished" podID="6c917585-1773-498c-a549-6c8546c23406" containerID="f4605c3e505b14498e9376044024262a189f689dad0a65f24a1457f6d59fcd52" exitCode=1 Dec 04 09:39:56 crc kubenswrapper[4841]: I1204 09:39:56.001822 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" event={"ID":"6c917585-1773-498c-a549-6c8546c23406","Type":"ContainerDied","Data":"f4605c3e505b14498e9376044024262a189f689dad0a65f24a1457f6d59fcd52"} Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.297118 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.405196 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-entrypoint-script\") pod \"6c917585-1773-498c-a549-6c8546c23406\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.405261 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-healthcheck-log\") pod \"6c917585-1773-498c-a549-6c8546c23406\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.405306 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-sensubility-config\") pod \"6c917585-1773-498c-a549-6c8546c23406\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.405322 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6ncz\" (UniqueName: \"kubernetes.io/projected/6c917585-1773-498c-a549-6c8546c23406-kube-api-access-x6ncz\") pod \"6c917585-1773-498c-a549-6c8546c23406\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.405344 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-config\") pod \"6c917585-1773-498c-a549-6c8546c23406\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.405399 4841 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-publisher\") pod \"6c917585-1773-498c-a549-6c8546c23406\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.405426 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-entrypoint-script\") pod \"6c917585-1773-498c-a549-6c8546c23406\" (UID: \"6c917585-1773-498c-a549-6c8546c23406\") " Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.413000 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c917585-1773-498c-a549-6c8546c23406-kube-api-access-x6ncz" (OuterVolumeSpecName: "kube-api-access-x6ncz") pod "6c917585-1773-498c-a549-6c8546c23406" (UID: "6c917585-1773-498c-a549-6c8546c23406"). InnerVolumeSpecName "kube-api-access-x6ncz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.423635 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "6c917585-1773-498c-a549-6c8546c23406" (UID: "6c917585-1773-498c-a549-6c8546c23406"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.426394 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "6c917585-1773-498c-a549-6c8546c23406" (UID: "6c917585-1773-498c-a549-6c8546c23406"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.427294 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "6c917585-1773-498c-a549-6c8546c23406" (UID: "6c917585-1773-498c-a549-6c8546c23406"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.427984 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "6c917585-1773-498c-a549-6c8546c23406" (UID: "6c917585-1773-498c-a549-6c8546c23406"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.431240 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "6c917585-1773-498c-a549-6c8546c23406" (UID: "6c917585-1773-498c-a549-6c8546c23406"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.434843 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "6c917585-1773-498c-a549-6c8546c23406" (UID: "6c917585-1773-498c-a549-6c8546c23406"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.506958 4841 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.507011 4841 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.507032 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6ncz\" (UniqueName: \"kubernetes.io/projected/6c917585-1773-498c-a549-6c8546c23406-kube-api-access-x6ncz\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.507054 4841 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.507074 4841 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.507093 4841 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 04 09:39:57 crc kubenswrapper[4841]: I1204 09:39:57.507111 4841 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c917585-1773-498c-a549-6c8546c23406-ceilometer-entrypoint-script\") on node 
\"crc\" DevicePath \"\"" Dec 04 09:39:58 crc kubenswrapper[4841]: I1204 09:39:58.024985 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" event={"ID":"6c917585-1773-498c-a549-6c8546c23406","Type":"ContainerDied","Data":"cc4b09b6da5238e38d27d04411683a9a8a3fd1768e6c0893ab823704e79715f2"} Dec 04 09:39:58 crc kubenswrapper[4841]: I1204 09:39:58.025032 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc4b09b6da5238e38d27d04411683a9a8a3fd1768e6c0893ab823704e79715f2" Dec 04 09:39:58 crc kubenswrapper[4841]: I1204 09:39:58.025053 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-5wmwb" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.045393 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-6k76g"] Dec 04 09:40:35 crc kubenswrapper[4841]: E1204 09:40:35.050000 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c917585-1773-498c-a549-6c8546c23406" containerName="smoketest-ceilometer" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.050019 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c917585-1773-498c-a549-6c8546c23406" containerName="smoketest-ceilometer" Dec 04 09:40:35 crc kubenswrapper[4841]: E1204 09:40:35.050036 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c917585-1773-498c-a549-6c8546c23406" containerName="smoketest-collectd" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.050044 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c917585-1773-498c-a549-6c8546c23406" containerName="smoketest-collectd" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.050179 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c917585-1773-498c-a549-6c8546c23406" containerName="smoketest-ceilometer" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 
09:40:35.050197 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c917585-1773-498c-a549-6c8546c23406" containerName="smoketest-collectd" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.051102 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.054509 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.055656 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.056047 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.056408 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.056582 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.057049 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.058563 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-6k76g"] Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.085663 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hkr7\" (UniqueName: \"kubernetes.io/projected/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-kube-api-access-5hkr7\") pod \"stf-smoketest-smoke1-6k76g\" (UID: 
\"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.085801 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-healthcheck-log\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.085897 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-sensubility-config\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.085965 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.086048 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-publisher\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.086166 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.086224 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-config\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.187875 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hkr7\" (UniqueName: \"kubernetes.io/projected/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-kube-api-access-5hkr7\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.187957 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-healthcheck-log\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.188016 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-sensubility-config\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.188056 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.188110 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-publisher\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.188189 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.188228 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-config\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.190134 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-sensubility-config\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.190689 4841 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-healthcheck-log\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.191713 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-config\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.191839 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-publisher\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.192284 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.192363 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.227831 4841 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5hkr7\" (UniqueName: \"kubernetes.io/projected/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-kube-api-access-5hkr7\") pod \"stf-smoketest-smoke1-6k76g\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.376095 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:40:35 crc kubenswrapper[4841]: I1204 09:40:35.684436 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-6k76g"] Dec 04 09:40:36 crc kubenswrapper[4841]: I1204 09:40:36.378118 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-6k76g" event={"ID":"4b7e005f-1051-4c6b-b135-6ebdc280f4ba","Type":"ContainerStarted","Data":"b2f2e71b69251d934e2de890abf95ccc96321a89f37badd7b747e9957f0bcf7c"} Dec 04 09:40:36 crc kubenswrapper[4841]: I1204 09:40:36.378589 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-6k76g" event={"ID":"4b7e005f-1051-4c6b-b135-6ebdc280f4ba","Type":"ContainerStarted","Data":"2d4d022c2c4968cc3e0c6345b8a310ae37a24061a90c38adc9aa597e9fbdbf5b"} Dec 04 09:40:36 crc kubenswrapper[4841]: I1204 09:40:36.378615 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-6k76g" event={"ID":"4b7e005f-1051-4c6b-b135-6ebdc280f4ba","Type":"ContainerStarted","Data":"e34be5c00990a33ead38b39a271d212ac16fd65094d2cb2b76590e68d49a1431"} Dec 04 09:40:36 crc kubenswrapper[4841]: I1204 09:40:36.417753 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-6k76g" podStartSLOduration=1.41773386 podStartE2EDuration="1.41773386s" podCreationTimestamp="2025-12-04 09:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 09:40:36.406661602 +0000 UTC m=+1303.158451836" watchObservedRunningTime="2025-12-04 09:40:36.41773386 +0000 UTC m=+1303.169524084" Dec 04 09:40:50 crc kubenswrapper[4841]: I1204 09:40:50.498272 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:40:50 crc kubenswrapper[4841]: I1204 09:40:50.498944 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:41:08 crc kubenswrapper[4841]: I1204 09:41:08.706033 4841 generic.go:334] "Generic (PLEG): container finished" podID="4b7e005f-1051-4c6b-b135-6ebdc280f4ba" containerID="b2f2e71b69251d934e2de890abf95ccc96321a89f37badd7b747e9957f0bcf7c" exitCode=0 Dec 04 09:41:08 crc kubenswrapper[4841]: I1204 09:41:08.706170 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-6k76g" event={"ID":"4b7e005f-1051-4c6b-b135-6ebdc280f4ba","Type":"ContainerDied","Data":"b2f2e71b69251d934e2de890abf95ccc96321a89f37badd7b747e9957f0bcf7c"} Dec 04 09:41:08 crc kubenswrapper[4841]: I1204 09:41:08.707893 4841 scope.go:117] "RemoveContainer" containerID="b2f2e71b69251d934e2de890abf95ccc96321a89f37badd7b747e9957f0bcf7c" Dec 04 09:41:09 crc kubenswrapper[4841]: I1204 09:41:09.720561 4841 generic.go:334] "Generic (PLEG): container finished" podID="4b7e005f-1051-4c6b-b135-6ebdc280f4ba" containerID="2d4d022c2c4968cc3e0c6345b8a310ae37a24061a90c38adc9aa597e9fbdbf5b" exitCode=0 Dec 04 09:41:09 crc 
kubenswrapper[4841]: I1204 09:41:09.720637 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-6k76g" event={"ID":"4b7e005f-1051-4c6b-b135-6ebdc280f4ba","Type":"ContainerDied","Data":"2d4d022c2c4968cc3e0c6345b8a310ae37a24061a90c38adc9aa597e9fbdbf5b"} Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.081992 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.179700 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-config\") pod \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.179818 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-sensubility-config\") pod \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.179908 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-publisher\") pod \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.179978 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-entrypoint-script\") pod \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " Dec 04 09:41:11 crc 
kubenswrapper[4841]: I1204 09:41:11.180038 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-entrypoint-script\") pod \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.180185 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hkr7\" (UniqueName: \"kubernetes.io/projected/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-kube-api-access-5hkr7\") pod \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.180218 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-healthcheck-log\") pod \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\" (UID: \"4b7e005f-1051-4c6b-b135-6ebdc280f4ba\") " Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.187654 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-kube-api-access-5hkr7" (OuterVolumeSpecName: "kube-api-access-5hkr7") pod "4b7e005f-1051-4c6b-b135-6ebdc280f4ba" (UID: "4b7e005f-1051-4c6b-b135-6ebdc280f4ba"). InnerVolumeSpecName "kube-api-access-5hkr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.202158 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "4b7e005f-1051-4c6b-b135-6ebdc280f4ba" (UID: "4b7e005f-1051-4c6b-b135-6ebdc280f4ba"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.218653 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "4b7e005f-1051-4c6b-b135-6ebdc280f4ba" (UID: "4b7e005f-1051-4c6b-b135-6ebdc280f4ba"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.219707 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "4b7e005f-1051-4c6b-b135-6ebdc280f4ba" (UID: "4b7e005f-1051-4c6b-b135-6ebdc280f4ba"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.219855 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "4b7e005f-1051-4c6b-b135-6ebdc280f4ba" (UID: "4b7e005f-1051-4c6b-b135-6ebdc280f4ba"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.221671 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "4b7e005f-1051-4c6b-b135-6ebdc280f4ba" (UID: "4b7e005f-1051-4c6b-b135-6ebdc280f4ba"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.222430 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "4b7e005f-1051-4c6b-b135-6ebdc280f4ba" (UID: "4b7e005f-1051-4c6b-b135-6ebdc280f4ba"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.283044 4841 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.283097 4841 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.283118 4841 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.283149 4841 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.283169 4841 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.283192 4841 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hkr7\" (UniqueName: \"kubernetes.io/projected/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-kube-api-access-5hkr7\") on node \"crc\" DevicePath \"\"" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.283211 4841 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/4b7e005f-1051-4c6b-b135-6ebdc280f4ba-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.742293 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-6k76g" event={"ID":"4b7e005f-1051-4c6b-b135-6ebdc280f4ba","Type":"ContainerDied","Data":"e34be5c00990a33ead38b39a271d212ac16fd65094d2cb2b76590e68d49a1431"} Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.742353 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e34be5c00990a33ead38b39a271d212ac16fd65094d2cb2b76590e68d49a1431" Dec 04 09:41:11 crc kubenswrapper[4841]: I1204 09:41:11.742737 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-6k76g" Dec 04 09:41:13 crc kubenswrapper[4841]: I1204 09:41:13.256749 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-5wmwb_6c917585-1773-498c-a549-6c8546c23406/smoketest-collectd/0.log" Dec 04 09:41:13 crc kubenswrapper[4841]: I1204 09:41:13.550138 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-5wmwb_6c917585-1773-498c-a549-6c8546c23406/smoketest-ceilometer/0.log" Dec 04 09:41:13 crc kubenswrapper[4841]: I1204 09:41:13.849207 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-68c58_c7f39520-aec3-412e-b0fe-358e97d00b51/default-interconnect/0.log" Dec 04 09:41:14 crc kubenswrapper[4841]: I1204 09:41:14.174244 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j_b2c333ba-eeaf-493b-9e70-f3a0d2129d7c/bridge/2.log" Dec 04 09:41:14 crc kubenswrapper[4841]: I1204 09:41:14.479754 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-2hk9j_b2c333ba-eeaf-493b-9e70-f3a0d2129d7c/sg-core/0.log" Dec 04 09:41:14 crc kubenswrapper[4841]: I1204 09:41:14.758146 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk_8175b165-39c3-489d-84e0-94fc420e7b87/bridge/2.log" Dec 04 09:41:15 crc kubenswrapper[4841]: I1204 09:41:15.085809 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-989d4bbd6-2j6mk_8175b165-39c3-489d-84e0-94fc420e7b87/sg-core/0.log" Dec 04 09:41:15 crc kubenswrapper[4841]: I1204 09:41:15.437822 4841 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk_d13cd4c4-469b-444a-b482-4bcb88d1721e/bridge/2.log" Dec 04 09:41:15 crc kubenswrapper[4841]: I1204 09:41:15.708486 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-t5mrk_d13cd4c4-469b-444a-b482-4bcb88d1721e/sg-core/0.log" Dec 04 09:41:15 crc kubenswrapper[4841]: I1204 09:41:15.993559 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-587c778df-zj5mm_eb0cf76d-24b8-4be7-9161-58151b914f39/bridge/2.log" Dec 04 09:41:17 crc kubenswrapper[4841]: I1204 09:41:17.266959 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-587c778df-zj5mm_eb0cf76d-24b8-4be7-9161-58151b914f39/sg-core/0.log" Dec 04 09:41:17 crc kubenswrapper[4841]: I1204 09:41:17.585945 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll_29138a71-c959-4de4-8fc6-67573c77f301/bridge/2.log" Dec 04 09:41:17 crc kubenswrapper[4841]: I1204 09:41:17.897412 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-gq6ll_29138a71-c959-4de4-8fc6-67573c77f301/sg-core/0.log" Dec 04 09:41:20 crc kubenswrapper[4841]: I1204 09:41:20.497235 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:41:20 crc kubenswrapper[4841]: I1204 09:41:20.497298 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:41:21 crc kubenswrapper[4841]: I1204 09:41:21.071285 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-595bcf4c87-mp4jt_2ff86174-988f-4af5-a9cb-2a8a1b6feb5d/operator/0.log" Dec 04 09:41:21 crc kubenswrapper[4841]: I1204 09:41:21.358060 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_2f2591b2-4c37-4eb0-afa7-a3d2238b6c03/prometheus/0.log" Dec 04 09:41:21 crc kubenswrapper[4841]: I1204 09:41:21.654989 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_447eaad4-e25d-4d71-a0e2-f720640f3ba2/elasticsearch/0.log" Dec 04 09:41:21 crc kubenswrapper[4841]: I1204 09:41:21.924340 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-xfd25_52131b4e-c5ec-483e-8288-12a5fc2d9897/prometheus-webhook-snmp/0.log" Dec 04 09:41:22 crc kubenswrapper[4841]: I1204 09:41:22.191628 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_ddaade57-6ede-426a-b388-c1351af31426/alertmanager/0.log" Dec 04 09:41:35 crc kubenswrapper[4841]: I1204 09:41:35.631202 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-57f5646b69-t4vdd_d2338232-efab-4584-b317-2ccd0b36eaf2/operator/0.log" Dec 04 09:41:38 crc kubenswrapper[4841]: I1204 09:41:38.928562 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-595bcf4c87-mp4jt_2ff86174-988f-4af5-a9cb-2a8a1b6feb5d/operator/0.log" Dec 04 09:41:39 crc kubenswrapper[4841]: I1204 09:41:39.250254 4841 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_qdr-test_e2fc172e-6b3d-4972-b353-0db18593824c/qdr/0.log" Dec 04 09:41:50 crc kubenswrapper[4841]: I1204 09:41:50.497561 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:41:50 crc kubenswrapper[4841]: I1204 09:41:50.498328 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:41:50 crc kubenswrapper[4841]: I1204 09:41:50.498400 4841 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:41:50 crc kubenswrapper[4841]: I1204 09:41:50.499445 4841 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dbe8eeee837ded207479fbd61ebaa419fe865d6e5d03d556bce405e52905df7e"} pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 09:41:50 crc kubenswrapper[4841]: I1204 09:41:50.499552 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" containerID="cri-o://dbe8eeee837ded207479fbd61ebaa419fe865d6e5d03d556bce405e52905df7e" gracePeriod=600 Dec 04 09:41:51 crc kubenswrapper[4841]: I1204 09:41:51.064211 4841 generic.go:334] "Generic (PLEG): container finished" 
podID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerID="dbe8eeee837ded207479fbd61ebaa419fe865d6e5d03d556bce405e52905df7e" exitCode=0 Dec 04 09:41:51 crc kubenswrapper[4841]: I1204 09:41:51.064294 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerDied","Data":"dbe8eeee837ded207479fbd61ebaa419fe865d6e5d03d556bce405e52905df7e"} Dec 04 09:41:51 crc kubenswrapper[4841]: I1204 09:41:51.064666 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerStarted","Data":"409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5"} Dec 04 09:41:51 crc kubenswrapper[4841]: I1204 09:41:51.064697 4841 scope.go:117] "RemoveContainer" containerID="be1ab26a9ac362b21017f4010e4bc6269da805850098c31a45b045e6974aad77" Dec 04 09:42:00 crc kubenswrapper[4841]: I1204 09:42:00.176692 4841 scope.go:117] "RemoveContainer" containerID="112207dcad702806e1873a0584aea006b9748a8d47c0be0aadab38444d30258a" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.762646 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k6wxw/must-gather-bh6vg"] Dec 04 09:42:13 crc kubenswrapper[4841]: E1204 09:42:13.763506 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7e005f-1051-4c6b-b135-6ebdc280f4ba" containerName="smoketest-ceilometer" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.763524 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7e005f-1051-4c6b-b135-6ebdc280f4ba" containerName="smoketest-ceilometer" Dec 04 09:42:13 crc kubenswrapper[4841]: E1204 09:42:13.763538 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7e005f-1051-4c6b-b135-6ebdc280f4ba" containerName="smoketest-collectd" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 
09:42:13.763547 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7e005f-1051-4c6b-b135-6ebdc280f4ba" containerName="smoketest-collectd" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.763730 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7e005f-1051-4c6b-b135-6ebdc280f4ba" containerName="smoketest-ceilometer" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.763781 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7e005f-1051-4c6b-b135-6ebdc280f4ba" containerName="smoketest-collectd" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.764807 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k6wxw/must-gather-bh6vg" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.771358 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k6wxw"/"openshift-service-ca.crt" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.771389 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k6wxw"/"kube-root-ca.crt" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.829941 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k6wxw/must-gather-bh6vg"] Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.867548 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7093ee10-c868-4939-9e21-6dd7541f0c5b-must-gather-output\") pod \"must-gather-bh6vg\" (UID: \"7093ee10-c868-4939-9e21-6dd7541f0c5b\") " pod="openshift-must-gather-k6wxw/must-gather-bh6vg" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.867659 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxfl9\" (UniqueName: \"kubernetes.io/projected/7093ee10-c868-4939-9e21-6dd7541f0c5b-kube-api-access-dxfl9\") 
pod \"must-gather-bh6vg\" (UID: \"7093ee10-c868-4939-9e21-6dd7541f0c5b\") " pod="openshift-must-gather-k6wxw/must-gather-bh6vg" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.969494 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7093ee10-c868-4939-9e21-6dd7541f0c5b-must-gather-output\") pod \"must-gather-bh6vg\" (UID: \"7093ee10-c868-4939-9e21-6dd7541f0c5b\") " pod="openshift-must-gather-k6wxw/must-gather-bh6vg" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.969596 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxfl9\" (UniqueName: \"kubernetes.io/projected/7093ee10-c868-4939-9e21-6dd7541f0c5b-kube-api-access-dxfl9\") pod \"must-gather-bh6vg\" (UID: \"7093ee10-c868-4939-9e21-6dd7541f0c5b\") " pod="openshift-must-gather-k6wxw/must-gather-bh6vg" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.969972 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7093ee10-c868-4939-9e21-6dd7541f0c5b-must-gather-output\") pod \"must-gather-bh6vg\" (UID: \"7093ee10-c868-4939-9e21-6dd7541f0c5b\") " pod="openshift-must-gather-k6wxw/must-gather-bh6vg" Dec 04 09:42:13 crc kubenswrapper[4841]: I1204 09:42:13.987028 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxfl9\" (UniqueName: \"kubernetes.io/projected/7093ee10-c868-4939-9e21-6dd7541f0c5b-kube-api-access-dxfl9\") pod \"must-gather-bh6vg\" (UID: \"7093ee10-c868-4939-9e21-6dd7541f0c5b\") " pod="openshift-must-gather-k6wxw/must-gather-bh6vg" Dec 04 09:42:14 crc kubenswrapper[4841]: I1204 09:42:14.084632 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k6wxw/must-gather-bh6vg" Dec 04 09:42:14 crc kubenswrapper[4841]: I1204 09:42:14.514737 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k6wxw/must-gather-bh6vg"] Dec 04 09:42:14 crc kubenswrapper[4841]: I1204 09:42:14.522424 4841 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 09:42:15 crc kubenswrapper[4841]: I1204 09:42:15.267116 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6wxw/must-gather-bh6vg" event={"ID":"7093ee10-c868-4939-9e21-6dd7541f0c5b","Type":"ContainerStarted","Data":"848353104c6ff357596ec6316f2545b763fcafa84abe01ef1313a66fa72fc2f3"} Dec 04 09:42:27 crc kubenswrapper[4841]: I1204 09:42:27.359357 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6wxw/must-gather-bh6vg" event={"ID":"7093ee10-c868-4939-9e21-6dd7541f0c5b","Type":"ContainerStarted","Data":"a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b"} Dec 04 09:42:27 crc kubenswrapper[4841]: I1204 09:42:27.359946 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6wxw/must-gather-bh6vg" event={"ID":"7093ee10-c868-4939-9e21-6dd7541f0c5b","Type":"ContainerStarted","Data":"0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575"} Dec 04 09:42:27 crc kubenswrapper[4841]: I1204 09:42:27.373412 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k6wxw/must-gather-bh6vg" podStartSLOduration=2.539685888 podStartE2EDuration="14.373393481s" podCreationTimestamp="2025-12-04 09:42:13 +0000 UTC" firstStartedPulling="2025-12-04 09:42:14.522378952 +0000 UTC m=+1401.274169156" lastFinishedPulling="2025-12-04 09:42:26.356086555 +0000 UTC m=+1413.107876749" observedRunningTime="2025-12-04 09:42:27.371715289 +0000 UTC m=+1414.123505503" watchObservedRunningTime="2025-12-04 09:42:27.373393481 +0000 UTC 
m=+1414.125183695" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.622845 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7lz62"] Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.625405 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.636720 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lz62"] Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.811378 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-catalog-content\") pod \"certified-operators-7lz62\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.811445 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w74f5\" (UniqueName: \"kubernetes.io/projected/2fabb9fe-1d18-49f7-a275-2165659e61c8-kube-api-access-w74f5\") pod \"certified-operators-7lz62\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.811596 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-utilities\") pod \"certified-operators-7lz62\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.913071 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-utilities\") pod \"certified-operators-7lz62\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.913146 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-catalog-content\") pod \"certified-operators-7lz62\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.913190 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w74f5\" (UniqueName: \"kubernetes.io/projected/2fabb9fe-1d18-49f7-a275-2165659e61c8-kube-api-access-w74f5\") pod \"certified-operators-7lz62\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.913584 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-utilities\") pod \"certified-operators-7lz62\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.913595 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-catalog-content\") pod \"certified-operators-7lz62\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.933895 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w74f5\" (UniqueName: 
\"kubernetes.io/projected/2fabb9fe-1d18-49f7-a275-2165659e61c8-kube-api-access-w74f5\") pod \"certified-operators-7lz62\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:06 crc kubenswrapper[4841]: I1204 09:43:06.944173 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:07 crc kubenswrapper[4841]: I1204 09:43:07.219724 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lz62"] Dec 04 09:43:07 crc kubenswrapper[4841]: I1204 09:43:07.683500 4841 generic.go:334] "Generic (PLEG): container finished" podID="2fabb9fe-1d18-49f7-a275-2165659e61c8" containerID="53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2" exitCode=0 Dec 04 09:43:07 crc kubenswrapper[4841]: I1204 09:43:07.683543 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lz62" event={"ID":"2fabb9fe-1d18-49f7-a275-2165659e61c8","Type":"ContainerDied","Data":"53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2"} Dec 04 09:43:07 crc kubenswrapper[4841]: I1204 09:43:07.683845 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lz62" event={"ID":"2fabb9fe-1d18-49f7-a275-2165659e61c8","Type":"ContainerStarted","Data":"b48df758338fe8a2bcc42bcd443c4795f8218a2e771f5cbc80a3cc7660b60a2d"} Dec 04 09:43:08 crc kubenswrapper[4841]: I1204 09:43:08.697160 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lz62" event={"ID":"2fabb9fe-1d18-49f7-a275-2165659e61c8","Type":"ContainerStarted","Data":"90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77"} Dec 04 09:43:09 crc kubenswrapper[4841]: I1204 09:43:09.706057 4841 generic.go:334] "Generic (PLEG): container finished" podID="2fabb9fe-1d18-49f7-a275-2165659e61c8" 
containerID="90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77" exitCode=0 Dec 04 09:43:09 crc kubenswrapper[4841]: I1204 09:43:09.706100 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lz62" event={"ID":"2fabb9fe-1d18-49f7-a275-2165659e61c8","Type":"ContainerDied","Data":"90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77"} Dec 04 09:43:10 crc kubenswrapper[4841]: I1204 09:43:10.725652 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lz62" event={"ID":"2fabb9fe-1d18-49f7-a275-2165659e61c8","Type":"ContainerStarted","Data":"46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310"} Dec 04 09:43:10 crc kubenswrapper[4841]: I1204 09:43:10.760069 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7lz62" podStartSLOduration=2.319442154 podStartE2EDuration="4.760046019s" podCreationTimestamp="2025-12-04 09:43:06 +0000 UTC" firstStartedPulling="2025-12-04 09:43:07.685684384 +0000 UTC m=+1454.437474588" lastFinishedPulling="2025-12-04 09:43:10.126288239 +0000 UTC m=+1456.878078453" observedRunningTime="2025-12-04 09:43:10.7513508 +0000 UTC m=+1457.503141044" watchObservedRunningTime="2025-12-04 09:43:10.760046019 +0000 UTC m=+1457.511836253" Dec 04 09:43:14 crc kubenswrapper[4841]: I1204 09:43:14.690132 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6zltn_c201ee2d-0b9b-4737-b0d9-091ccd258e1e/control-plane-machine-set-operator/0.log" Dec 04 09:43:14 crc kubenswrapper[4841]: I1204 09:43:14.784702 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-42c5d_6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b/kube-rbac-proxy/0.log" Dec 04 09:43:14 crc kubenswrapper[4841]: I1204 09:43:14.855620 4841 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-42c5d_6ed1c595-af7f-4f3f-bcb2-7da2461d1d4b/machine-api-operator/0.log" Dec 04 09:43:16 crc kubenswrapper[4841]: I1204 09:43:16.944851 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:16 crc kubenswrapper[4841]: I1204 09:43:16.945199 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:17 crc kubenswrapper[4841]: I1204 09:43:17.017489 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:17 crc kubenswrapper[4841]: I1204 09:43:17.816435 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:17 crc kubenswrapper[4841]: I1204 09:43:17.861225 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lz62"] Dec 04 09:43:19 crc kubenswrapper[4841]: I1204 09:43:19.794636 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7lz62" podUID="2fabb9fe-1d18-49f7-a275-2165659e61c8" containerName="registry-server" containerID="cri-o://46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310" gracePeriod=2 Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.465967 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.657866 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-catalog-content\") pod \"2fabb9fe-1d18-49f7-a275-2165659e61c8\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.657966 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-utilities\") pod \"2fabb9fe-1d18-49f7-a275-2165659e61c8\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.658128 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w74f5\" (UniqueName: \"kubernetes.io/projected/2fabb9fe-1d18-49f7-a275-2165659e61c8-kube-api-access-w74f5\") pod \"2fabb9fe-1d18-49f7-a275-2165659e61c8\" (UID: \"2fabb9fe-1d18-49f7-a275-2165659e61c8\") " Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.658993 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-utilities" (OuterVolumeSpecName: "utilities") pod "2fabb9fe-1d18-49f7-a275-2165659e61c8" (UID: "2fabb9fe-1d18-49f7-a275-2165659e61c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.663602 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fabb9fe-1d18-49f7-a275-2165659e61c8-kube-api-access-w74f5" (OuterVolumeSpecName: "kube-api-access-w74f5") pod "2fabb9fe-1d18-49f7-a275-2165659e61c8" (UID: "2fabb9fe-1d18-49f7-a275-2165659e61c8"). InnerVolumeSpecName "kube-api-access-w74f5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.723850 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fabb9fe-1d18-49f7-a275-2165659e61c8" (UID: "2fabb9fe-1d18-49f7-a275-2165659e61c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.760239 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w74f5\" (UniqueName: \"kubernetes.io/projected/2fabb9fe-1d18-49f7-a275-2165659e61c8-kube-api-access-w74f5\") on node \"crc\" DevicePath \"\"" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.760308 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.760328 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fabb9fe-1d18-49f7-a275-2165659e61c8-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.812939 4841 generic.go:334] "Generic (PLEG): container finished" podID="2fabb9fe-1d18-49f7-a275-2165659e61c8" containerID="46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310" exitCode=0 Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.813008 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lz62" event={"ID":"2fabb9fe-1d18-49f7-a275-2165659e61c8","Type":"ContainerDied","Data":"46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310"} Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.813045 4841 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lz62" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.813069 4841 scope.go:117] "RemoveContainer" containerID="46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.813051 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lz62" event={"ID":"2fabb9fe-1d18-49f7-a275-2165659e61c8","Type":"ContainerDied","Data":"b48df758338fe8a2bcc42bcd443c4795f8218a2e771f5cbc80a3cc7660b60a2d"} Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.844080 4841 scope.go:117] "RemoveContainer" containerID="90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.869756 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lz62"] Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.870137 4841 scope.go:117] "RemoveContainer" containerID="53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.877119 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7lz62"] Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.903567 4841 scope.go:117] "RemoveContainer" containerID="46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310" Dec 04 09:43:21 crc kubenswrapper[4841]: E1204 09:43:21.904161 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310\": container with ID starting with 46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310 not found: ID does not exist" containerID="46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.904207 
4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310"} err="failed to get container status \"46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310\": rpc error: code = NotFound desc = could not find container \"46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310\": container with ID starting with 46a3a9fa2ff4b26c76d1679680330090ed8b0721a3ab2c3873bab37d549f8310 not found: ID does not exist" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.904244 4841 scope.go:117] "RemoveContainer" containerID="90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77" Dec 04 09:43:21 crc kubenswrapper[4841]: E1204 09:43:21.904589 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77\": container with ID starting with 90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77 not found: ID does not exist" containerID="90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.904625 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77"} err="failed to get container status \"90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77\": rpc error: code = NotFound desc = could not find container \"90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77\": container with ID starting with 90f82edeb4699941bcb6846771c909aeb6082fd2a09ce43a8b3235e6a1d8ee77 not found: ID does not exist" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.904644 4841 scope.go:117] "RemoveContainer" containerID="53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2" Dec 04 09:43:21 crc kubenswrapper[4841]: E1204 
09:43:21.905083 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2\": container with ID starting with 53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2 not found: ID does not exist" containerID="53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2" Dec 04 09:43:21 crc kubenswrapper[4841]: I1204 09:43:21.905130 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2"} err="failed to get container status \"53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2\": rpc error: code = NotFound desc = could not find container \"53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2\": container with ID starting with 53a12874d67a12b44dda5eadf840aba5eafe831f5183b7d121da43b7e8cd22e2 not found: ID does not exist" Dec 04 09:43:23 crc kubenswrapper[4841]: I1204 09:43:23.631516 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fabb9fe-1d18-49f7-a275-2165659e61c8" path="/var/lib/kubelet/pods/2fabb9fe-1d18-49f7-a275-2165659e61c8/volumes" Dec 04 09:43:28 crc kubenswrapper[4841]: I1204 09:43:28.196283 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-5hrcv_c3fa4e21-9119-47f0-a900-d0f08fd0fecf/cert-manager-controller/0.log" Dec 04 09:43:28 crc kubenswrapper[4841]: I1204 09:43:28.347425 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-nz48b_f9ab64cd-10d3-4143-90da-d4fe6e5525ab/cert-manager-cainjector/0.log" Dec 04 09:43:28 crc kubenswrapper[4841]: I1204 09:43:28.380713 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-lbdpk_0e2a0ffb-dd99-43e4-a1f4-cbd5c86f18ae/cert-manager-webhook/0.log" Dec 04 
09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.336451 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xslcr"] Dec 04 09:43:38 crc kubenswrapper[4841]: E1204 09:43:38.337226 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fabb9fe-1d18-49f7-a275-2165659e61c8" containerName="extract-content" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.337239 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fabb9fe-1d18-49f7-a275-2165659e61c8" containerName="extract-content" Dec 04 09:43:38 crc kubenswrapper[4841]: E1204 09:43:38.337247 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fabb9fe-1d18-49f7-a275-2165659e61c8" containerName="registry-server" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.337253 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fabb9fe-1d18-49f7-a275-2165659e61c8" containerName="registry-server" Dec 04 09:43:38 crc kubenswrapper[4841]: E1204 09:43:38.337267 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fabb9fe-1d18-49f7-a275-2165659e61c8" containerName="extract-utilities" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.337273 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fabb9fe-1d18-49f7-a275-2165659e61c8" containerName="extract-utilities" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.337394 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fabb9fe-1d18-49f7-a275-2165659e61c8" containerName="registry-server" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.338287 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.341094 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xl9d\" (UniqueName: \"kubernetes.io/projected/44188e40-aa67-4162-a0ea-be5a41abd5ed-kube-api-access-4xl9d\") pod \"redhat-operators-xslcr\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.341264 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-catalog-content\") pod \"redhat-operators-xslcr\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.341327 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-utilities\") pod \"redhat-operators-xslcr\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.351305 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xslcr"] Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.442529 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-catalog-content\") pod \"redhat-operators-xslcr\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.442887 4841 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-utilities\") pod \"redhat-operators-xslcr\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.442956 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xl9d\" (UniqueName: \"kubernetes.io/projected/44188e40-aa67-4162-a0ea-be5a41abd5ed-kube-api-access-4xl9d\") pod \"redhat-operators-xslcr\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.443108 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-catalog-content\") pod \"redhat-operators-xslcr\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.444317 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-utilities\") pod \"redhat-operators-xslcr\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.470050 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xl9d\" (UniqueName: \"kubernetes.io/projected/44188e40-aa67-4162-a0ea-be5a41abd5ed-kube-api-access-4xl9d\") pod \"redhat-operators-xslcr\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.715870 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.962182 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xslcr"] Dec 04 09:43:38 crc kubenswrapper[4841]: W1204 09:43:38.969304 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44188e40_aa67_4162_a0ea_be5a41abd5ed.slice/crio-0fca2333478ae63b56c12ef557463cae301bbcb8b17ba8548050c118cdfbd9d2 WatchSource:0}: Error finding container 0fca2333478ae63b56c12ef557463cae301bbcb8b17ba8548050c118cdfbd9d2: Status 404 returned error can't find the container with id 0fca2333478ae63b56c12ef557463cae301bbcb8b17ba8548050c118cdfbd9d2 Dec 04 09:43:38 crc kubenswrapper[4841]: I1204 09:43:38.978067 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xslcr" event={"ID":"44188e40-aa67-4162-a0ea-be5a41abd5ed","Type":"ContainerStarted","Data":"0fca2333478ae63b56c12ef557463cae301bbcb8b17ba8548050c118cdfbd9d2"} Dec 04 09:43:39 crc kubenswrapper[4841]: I1204 09:43:39.989127 4841 generic.go:334] "Generic (PLEG): container finished" podID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerID="16cd722b554f83d2b5343eae9f1e378c7c991f8682d1131ac0120842cb1bc0e5" exitCode=0 Dec 04 09:43:39 crc kubenswrapper[4841]: I1204 09:43:39.989185 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xslcr" event={"ID":"44188e40-aa67-4162-a0ea-be5a41abd5ed","Type":"ContainerDied","Data":"16cd722b554f83d2b5343eae9f1e378c7c991f8682d1131ac0120842cb1bc0e5"} Dec 04 09:43:40 crc kubenswrapper[4841]: I1204 09:43:40.998779 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xslcr" 
event={"ID":"44188e40-aa67-4162-a0ea-be5a41abd5ed","Type":"ContainerStarted","Data":"f9d3deb07711bd1456f8e4095a120a2d90057460eb451324ca268bc54c746f18"} Dec 04 09:43:42 crc kubenswrapper[4841]: I1204 09:43:42.009799 4841 generic.go:334] "Generic (PLEG): container finished" podID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerID="f9d3deb07711bd1456f8e4095a120a2d90057460eb451324ca268bc54c746f18" exitCode=0 Dec 04 09:43:42 crc kubenswrapper[4841]: I1204 09:43:42.009924 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xslcr" event={"ID":"44188e40-aa67-4162-a0ea-be5a41abd5ed","Type":"ContainerDied","Data":"f9d3deb07711bd1456f8e4095a120a2d90057460eb451324ca268bc54c746f18"} Dec 04 09:43:43 crc kubenswrapper[4841]: I1204 09:43:43.018370 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xslcr" event={"ID":"44188e40-aa67-4162-a0ea-be5a41abd5ed","Type":"ContainerStarted","Data":"cfcce342932190d1397c1e962d502a93d21a36c12e83b744c2c6c96356b7ba91"} Dec 04 09:43:43 crc kubenswrapper[4841]: I1204 09:43:43.038434 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xslcr" podStartSLOduration=2.508554459 podStartE2EDuration="5.038418652s" podCreationTimestamp="2025-12-04 09:43:38 +0000 UTC" firstStartedPulling="2025-12-04 09:43:39.991509709 +0000 UTC m=+1486.743299913" lastFinishedPulling="2025-12-04 09:43:42.521373902 +0000 UTC m=+1489.273164106" observedRunningTime="2025-12-04 09:43:43.034402195 +0000 UTC m=+1489.786192399" watchObservedRunningTime="2025-12-04 09:43:43.038418652 +0000 UTC m=+1489.790208856" Dec 04 09:43:43 crc kubenswrapper[4841]: I1204 09:43:43.851320 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6_f85e6ab8-4e46-4b30-b425-1f812e4faabc/util/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 
09:43:44.047313 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6_f85e6ab8-4e46-4b30-b425-1f812e4faabc/util/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.048633 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6_f85e6ab8-4e46-4b30-b425-1f812e4faabc/pull/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.121951 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6_f85e6ab8-4e46-4b30-b425-1f812e4faabc/pull/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.213547 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6_f85e6ab8-4e46-4b30-b425-1f812e4faabc/util/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.252463 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6_f85e6ab8-4e46-4b30-b425-1f812e4faabc/pull/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.273598 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ak8sq6_f85e6ab8-4e46-4b30-b425-1f812e4faabc/extract/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.379590 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4_16c8917f-1adc-4ed5-bc5d-465d125693a9/util/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.542487 4841 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4_16c8917f-1adc-4ed5-bc5d-465d125693a9/util/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.552147 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4_16c8917f-1adc-4ed5-bc5d-465d125693a9/pull/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.584477 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4_16c8917f-1adc-4ed5-bc5d-465d125693a9/pull/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.805521 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4_16c8917f-1adc-4ed5-bc5d-465d125693a9/util/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.818487 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4_16c8917f-1adc-4ed5-bc5d-465d125693a9/extract/0.log" Dec 04 09:43:44 crc kubenswrapper[4841]: I1204 09:43:44.829985 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210h7cx4_16c8917f-1adc-4ed5-bc5d-465d125693a9/pull/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.001369 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7_e5b35625-5dbb-4f76-960d-04ac80bd487c/util/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.172664 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7_e5b35625-5dbb-4f76-960d-04ac80bd487c/util/0.log" Dec 04 
09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.175256 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7_e5b35625-5dbb-4f76-960d-04ac80bd487c/pull/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.248651 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7_e5b35625-5dbb-4f76-960d-04ac80bd487c/pull/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.361805 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7_e5b35625-5dbb-4f76-960d-04ac80bd487c/pull/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.389802 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7_e5b35625-5dbb-4f76-960d-04ac80bd487c/extract/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.390962 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f956q7_e5b35625-5dbb-4f76-960d-04ac80bd487c/util/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.680643 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l_0c9547ae-2030-4a77-a58d-e3a54a430e4f/util/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.746120 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l_0c9547ae-2030-4a77-a58d-e3a54a430e4f/util/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.751869 4841 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l_0c9547ae-2030-4a77-a58d-e3a54a430e4f/pull/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.831138 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l_0c9547ae-2030-4a77-a58d-e3a54a430e4f/pull/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.939563 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l_0c9547ae-2030-4a77-a58d-e3a54a430e4f/util/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.971992 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l_0c9547ae-2030-4a77-a58d-e3a54a430e4f/extract/0.log" Dec 04 09:43:45 crc kubenswrapper[4841]: I1204 09:43:45.979090 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edrv8l_0c9547ae-2030-4a77-a58d-e3a54a430e4f/pull/0.log" Dec 04 09:43:46 crc kubenswrapper[4841]: I1204 09:43:46.151689 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dvmfx_29e9cd14-c26a-46e1-a360-a581ae897e94/extract-utilities/0.log" Dec 04 09:43:46 crc kubenswrapper[4841]: I1204 09:43:46.267966 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dvmfx_29e9cd14-c26a-46e1-a360-a581ae897e94/extract-content/0.log" Dec 04 09:43:46 crc kubenswrapper[4841]: I1204 09:43:46.300576 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dvmfx_29e9cd14-c26a-46e1-a360-a581ae897e94/extract-content/0.log" Dec 04 09:43:46 crc kubenswrapper[4841]: I1204 09:43:46.319979 4841 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dvmfx_29e9cd14-c26a-46e1-a360-a581ae897e94/extract-utilities/0.log" Dec 04 09:43:46 crc kubenswrapper[4841]: I1204 09:43:46.529398 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dvmfx_29e9cd14-c26a-46e1-a360-a581ae897e94/extract-content/0.log" Dec 04 09:43:46 crc kubenswrapper[4841]: I1204 09:43:46.597807 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dvmfx_29e9cd14-c26a-46e1-a360-a581ae897e94/extract-utilities/0.log" Dec 04 09:43:46 crc kubenswrapper[4841]: I1204 09:43:46.799670 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dvmfx_29e9cd14-c26a-46e1-a360-a581ae897e94/registry-server/0.log" Dec 04 09:43:46 crc kubenswrapper[4841]: I1204 09:43:46.817747 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hb7jm_d2c87b0b-8ee5-4c87-90da-1dbba059e5aa/extract-utilities/0.log" Dec 04 09:43:46 crc kubenswrapper[4841]: I1204 09:43:46.973349 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hb7jm_d2c87b0b-8ee5-4c87-90da-1dbba059e5aa/extract-content/0.log" Dec 04 09:43:46 crc kubenswrapper[4841]: I1204 09:43:46.978958 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hb7jm_d2c87b0b-8ee5-4c87-90da-1dbba059e5aa/extract-utilities/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.017315 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hb7jm_d2c87b0b-8ee5-4c87-90da-1dbba059e5aa/extract-content/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.160446 4841 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-hb7jm_d2c87b0b-8ee5-4c87-90da-1dbba059e5aa/extract-utilities/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.193002 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hb7jm_d2c87b0b-8ee5-4c87-90da-1dbba059e5aa/extract-content/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.292829 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6kk8z_d3605f15-1f3c-4177-9401-1cb41d6d417b/marketplace-operator/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.413029 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hb7jm_d2c87b0b-8ee5-4c87-90da-1dbba059e5aa/registry-server/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.472665 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlzl_f8e3c550-3036-46e6-9851-3f09bdcfa65f/extract-utilities/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.610289 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlzl_f8e3c550-3036-46e6-9851-3f09bdcfa65f/extract-content/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.614880 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlzl_f8e3c550-3036-46e6-9851-3f09bdcfa65f/extract-content/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.658641 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlzl_f8e3c550-3036-46e6-9851-3f09bdcfa65f/extract-utilities/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.772854 4841 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-btlzl_f8e3c550-3036-46e6-9851-3f09bdcfa65f/extract-utilities/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.811582 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlzl_f8e3c550-3036-46e6-9851-3f09bdcfa65f/extract-content/0.log" Dec 04 09:43:47 crc kubenswrapper[4841]: I1204 09:43:47.826792 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xslcr_44188e40-aa67-4162-a0ea-be5a41abd5ed/extract-utilities/0.log" Dec 04 09:43:48 crc kubenswrapper[4841]: I1204 09:43:48.024113 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xslcr_44188e40-aa67-4162-a0ea-be5a41abd5ed/extract-utilities/0.log" Dec 04 09:43:48 crc kubenswrapper[4841]: I1204 09:43:48.057185 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlzl_f8e3c550-3036-46e6-9851-3f09bdcfa65f/registry-server/0.log" Dec 04 09:43:48 crc kubenswrapper[4841]: I1204 09:43:48.103298 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xslcr_44188e40-aa67-4162-a0ea-be5a41abd5ed/extract-content/0.log" Dec 04 09:43:48 crc kubenswrapper[4841]: I1204 09:43:48.103339 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xslcr_44188e40-aa67-4162-a0ea-be5a41abd5ed/extract-content/0.log" Dec 04 09:43:48 crc kubenswrapper[4841]: I1204 09:43:48.246444 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xslcr_44188e40-aa67-4162-a0ea-be5a41abd5ed/extract-utilities/0.log" Dec 04 09:43:48 crc kubenswrapper[4841]: I1204 09:43:48.251501 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xslcr_44188e40-aa67-4162-a0ea-be5a41abd5ed/extract-content/0.log" Dec 04 
09:43:48 crc kubenswrapper[4841]: I1204 09:43:48.254714 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xslcr_44188e40-aa67-4162-a0ea-be5a41abd5ed/registry-server/0.log" Dec 04 09:43:48 crc kubenswrapper[4841]: I1204 09:43:48.717168 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:48 crc kubenswrapper[4841]: I1204 09:43:48.717246 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:48 crc kubenswrapper[4841]: I1204 09:43:48.821032 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:49 crc kubenswrapper[4841]: I1204 09:43:49.115504 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:49 crc kubenswrapper[4841]: I1204 09:43:49.167317 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xslcr"] Dec 04 09:43:50 crc kubenswrapper[4841]: I1204 09:43:50.497525 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:43:50 crc kubenswrapper[4841]: I1204 09:43:50.497621 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:43:51 crc kubenswrapper[4841]: I1204 09:43:51.083316 4841 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-xslcr" podUID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerName="registry-server" containerID="cri-o://cfcce342932190d1397c1e962d502a93d21a36c12e83b744c2c6c96356b7ba91" gracePeriod=2 Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.120278 4841 generic.go:334] "Generic (PLEG): container finished" podID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerID="cfcce342932190d1397c1e962d502a93d21a36c12e83b744c2c6c96356b7ba91" exitCode=0 Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.120711 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xslcr" event={"ID":"44188e40-aa67-4162-a0ea-be5a41abd5ed","Type":"ContainerDied","Data":"cfcce342932190d1397c1e962d502a93d21a36c12e83b744c2c6c96356b7ba91"} Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.244786 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.394079 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xl9d\" (UniqueName: \"kubernetes.io/projected/44188e40-aa67-4162-a0ea-be5a41abd5ed-kube-api-access-4xl9d\") pod \"44188e40-aa67-4162-a0ea-be5a41abd5ed\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.394344 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-utilities\") pod \"44188e40-aa67-4162-a0ea-be5a41abd5ed\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.394397 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-catalog-content\") pod \"44188e40-aa67-4162-a0ea-be5a41abd5ed\" (UID: \"44188e40-aa67-4162-a0ea-be5a41abd5ed\") " Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.395273 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-utilities" (OuterVolumeSpecName: "utilities") pod "44188e40-aa67-4162-a0ea-be5a41abd5ed" (UID: "44188e40-aa67-4162-a0ea-be5a41abd5ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.400127 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44188e40-aa67-4162-a0ea-be5a41abd5ed-kube-api-access-4xl9d" (OuterVolumeSpecName: "kube-api-access-4xl9d") pod "44188e40-aa67-4162-a0ea-be5a41abd5ed" (UID: "44188e40-aa67-4162-a0ea-be5a41abd5ed"). InnerVolumeSpecName "kube-api-access-4xl9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.495591 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.495620 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xl9d\" (UniqueName: \"kubernetes.io/projected/44188e40-aa67-4162-a0ea-be5a41abd5ed-kube-api-access-4xl9d\") on node \"crc\" DevicePath \"\"" Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.536154 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44188e40-aa67-4162-a0ea-be5a41abd5ed" (UID: "44188e40-aa67-4162-a0ea-be5a41abd5ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:43:55 crc kubenswrapper[4841]: I1204 09:43:55.596582 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44188e40-aa67-4162-a0ea-be5a41abd5ed-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:43:56 crc kubenswrapper[4841]: I1204 09:43:56.131267 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xslcr" event={"ID":"44188e40-aa67-4162-a0ea-be5a41abd5ed","Type":"ContainerDied","Data":"0fca2333478ae63b56c12ef557463cae301bbcb8b17ba8548050c118cdfbd9d2"} Dec 04 09:43:56 crc kubenswrapper[4841]: I1204 09:43:56.131335 4841 scope.go:117] "RemoveContainer" containerID="cfcce342932190d1397c1e962d502a93d21a36c12e83b744c2c6c96356b7ba91" Dec 04 09:43:56 crc kubenswrapper[4841]: I1204 09:43:56.131347 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xslcr" Dec 04 09:43:56 crc kubenswrapper[4841]: I1204 09:43:56.169044 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xslcr"] Dec 04 09:43:56 crc kubenswrapper[4841]: I1204 09:43:56.186069 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xslcr"] Dec 04 09:43:56 crc kubenswrapper[4841]: I1204 09:43:56.192944 4841 scope.go:117] "RemoveContainer" containerID="f9d3deb07711bd1456f8e4095a120a2d90057460eb451324ca268bc54c746f18" Dec 04 09:43:56 crc kubenswrapper[4841]: I1204 09:43:56.225008 4841 scope.go:117] "RemoveContainer" containerID="16cd722b554f83d2b5343eae9f1e378c7c991f8682d1131ac0120842cb1bc0e5" Dec 04 09:43:57 crc kubenswrapper[4841]: I1204 09:43:57.632569 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44188e40-aa67-4162-a0ea-be5a41abd5ed" path="/var/lib/kubelet/pods/44188e40-aa67-4162-a0ea-be5a41abd5ed/volumes" Dec 04 09:44:02 crc 
kubenswrapper[4841]: I1204 09:44:02.453574 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-9w7br_5c329c80-8a48-46ed-951c-3eea7069ea2d/prometheus-operator/0.log" Dec 04 09:44:02 crc kubenswrapper[4841]: I1204 09:44:02.609037 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-857ff5dbf7-qql5w_328f71ee-61d3-45fa-b31b-920b60b829da/prometheus-operator-admission-webhook/0.log" Dec 04 09:44:02 crc kubenswrapper[4841]: I1204 09:44:02.654468 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-857ff5dbf7-wd6p9_41b62915-03e0-4a31-ad0e-1ae076c46c5d/prometheus-operator-admission-webhook/0.log" Dec 04 09:44:02 crc kubenswrapper[4841]: I1204 09:44:02.779875 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-6fwx2_17fa5a89-d2a0-4e18-a108-3d65165edf2c/operator/0.log" Dec 04 09:44:02 crc kubenswrapper[4841]: I1204 09:44:02.885403 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-5hpbz_503a4298-2b62-4e18-a4be-637dc8b9ffeb/perses-operator/0.log" Dec 04 09:44:20 crc kubenswrapper[4841]: I1204 09:44:20.497902 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:44:20 crc kubenswrapper[4841]: I1204 09:44:20.498522 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.243215 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-5xbnj"] Dec 04 09:44:34 crc kubenswrapper[4841]: E1204 09:44:34.244236 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerName="extract-utilities" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.244261 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerName="extract-utilities" Dec 04 09:44:34 crc kubenswrapper[4841]: E1204 09:44:34.244293 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerName="registry-server" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.244306 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerName="registry-server" Dec 04 09:44:34 crc kubenswrapper[4841]: E1204 09:44:34.244329 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerName="extract-content" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.244343 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerName="extract-content" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.244542 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="44188e40-aa67-4162-a0ea-be5a41abd5ed" containerName="registry-server" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.245187 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.263520 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-5xbnj"] Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.345643 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfcm6\" (UniqueName: \"kubernetes.io/projected/1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4-kube-api-access-hfcm6\") pod \"infrawatch-operators-5xbnj\" (UID: \"1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4\") " pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.446626 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfcm6\" (UniqueName: \"kubernetes.io/projected/1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4-kube-api-access-hfcm6\") pod \"infrawatch-operators-5xbnj\" (UID: \"1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4\") " pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.483936 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfcm6\" (UniqueName: \"kubernetes.io/projected/1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4-kube-api-access-hfcm6\") pod \"infrawatch-operators-5xbnj\" (UID: \"1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4\") " pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.576708 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:34 crc kubenswrapper[4841]: I1204 09:44:34.876756 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-5xbnj"] Dec 04 09:44:34 crc kubenswrapper[4841]: W1204 09:44:34.884609 4841 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e3a2f8f_4f60_4fb4_a50a_5f66aa07c8e4.slice/crio-d767617bfe4dcb8c1740873f14c6f2f9dcdaf35c1aa24cf05c6458661e532468 WatchSource:0}: Error finding container d767617bfe4dcb8c1740873f14c6f2f9dcdaf35c1aa24cf05c6458661e532468: Status 404 returned error can't find the container with id d767617bfe4dcb8c1740873f14c6f2f9dcdaf35c1aa24cf05c6458661e532468 Dec 04 09:44:35 crc kubenswrapper[4841]: I1204 09:44:35.461612 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-5xbnj" event={"ID":"1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4","Type":"ContainerStarted","Data":"c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c"} Dec 04 09:44:35 crc kubenswrapper[4841]: I1204 09:44:35.461977 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-5xbnj" event={"ID":"1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4","Type":"ContainerStarted","Data":"d767617bfe4dcb8c1740873f14c6f2f9dcdaf35c1aa24cf05c6458661e532468"} Dec 04 09:44:35 crc kubenswrapper[4841]: I1204 09:44:35.483510 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-5xbnj" podStartSLOduration=1.35848292 podStartE2EDuration="1.483493244s" podCreationTimestamp="2025-12-04 09:44:34 +0000 UTC" firstStartedPulling="2025-12-04 09:44:34.88708663 +0000 UTC m=+1541.638876874" lastFinishedPulling="2025-12-04 09:44:35.012096984 +0000 UTC m=+1541.763887198" observedRunningTime="2025-12-04 09:44:35.475991193 +0000 UTC m=+1542.227781407" 
watchObservedRunningTime="2025-12-04 09:44:35.483493244 +0000 UTC m=+1542.235283458" Dec 04 09:44:44 crc kubenswrapper[4841]: I1204 09:44:44.577590 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:44 crc kubenswrapper[4841]: I1204 09:44:44.578376 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:44 crc kubenswrapper[4841]: I1204 09:44:44.629947 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:45 crc kubenswrapper[4841]: I1204 09:44:45.582735 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:45 crc kubenswrapper[4841]: I1204 09:44:45.641078 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-5xbnj"] Dec 04 09:44:47 crc kubenswrapper[4841]: I1204 09:44:47.577992 4841 generic.go:334] "Generic (PLEG): container finished" podID="7093ee10-c868-4939-9e21-6dd7541f0c5b" containerID="0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575" exitCode=0 Dec 04 09:44:47 crc kubenswrapper[4841]: I1204 09:44:47.578143 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6wxw/must-gather-bh6vg" event={"ID":"7093ee10-c868-4939-9e21-6dd7541f0c5b","Type":"ContainerDied","Data":"0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575"} Dec 04 09:44:47 crc kubenswrapper[4841]: I1204 09:44:47.578756 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-5xbnj" podUID="1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4" containerName="registry-server" containerID="cri-o://c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c" gracePeriod=2 Dec 04 09:44:47 crc kubenswrapper[4841]: I1204 
09:44:47.579377 4841 scope.go:117] "RemoveContainer" containerID="0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575" Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.010806 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.154723 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfcm6\" (UniqueName: \"kubernetes.io/projected/1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4-kube-api-access-hfcm6\") pod \"1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4\" (UID: \"1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4\") " Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.164172 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4-kube-api-access-hfcm6" (OuterVolumeSpecName: "kube-api-access-hfcm6") pod "1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4" (UID: "1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4"). InnerVolumeSpecName "kube-api-access-hfcm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.256274 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfcm6\" (UniqueName: \"kubernetes.io/projected/1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4-kube-api-access-hfcm6\") on node \"crc\" DevicePath \"\"" Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.427522 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k6wxw_must-gather-bh6vg_7093ee10-c868-4939-9e21-6dd7541f0c5b/gather/0.log" Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.593017 4841 generic.go:334] "Generic (PLEG): container finished" podID="1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4" containerID="c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c" exitCode=0 Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.593097 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-5xbnj" event={"ID":"1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4","Type":"ContainerDied","Data":"c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c"} Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.593125 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-5xbnj" Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.593157 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-5xbnj" event={"ID":"1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4","Type":"ContainerDied","Data":"d767617bfe4dcb8c1740873f14c6f2f9dcdaf35c1aa24cf05c6458661e532468"} Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.593186 4841 scope.go:117] "RemoveContainer" containerID="c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c" Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.647011 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-5xbnj"] Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.647883 4841 scope.go:117] "RemoveContainer" containerID="c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c" Dec 04 09:44:48 crc kubenswrapper[4841]: E1204 09:44:48.650610 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c\": container with ID starting with c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c not found: ID does not exist" containerID="c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c" Dec 04 09:44:48 crc kubenswrapper[4841]: I1204 09:44:48.650692 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c"} err="failed to get container status \"c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c\": rpc error: code = NotFound desc = could not find container \"c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c\": container with ID starting with c3ec908c90482145c4c633917b5d7c97853cd901bd8544285410e1cd988d415c not found: ID does not exist" Dec 04 09:44:48 crc 
kubenswrapper[4841]: I1204 09:44:48.659791 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-5xbnj"] Dec 04 09:44:49 crc kubenswrapper[4841]: I1204 09:44:49.631041 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4" path="/var/lib/kubelet/pods/1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4/volumes" Dec 04 09:44:50 crc kubenswrapper[4841]: I1204 09:44:50.498298 4841 patch_prober.go:28] interesting pod/machine-config-daemon-rxw4w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 09:44:50 crc kubenswrapper[4841]: I1204 09:44:50.498432 4841 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 09:44:50 crc kubenswrapper[4841]: I1204 09:44:50.498496 4841 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" Dec 04 09:44:50 crc kubenswrapper[4841]: I1204 09:44:50.499282 4841 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5"} pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 09:44:50 crc kubenswrapper[4841]: I1204 09:44:50.499394 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" 
podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerName="machine-config-daemon" containerID="cri-o://409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" gracePeriod=600 Dec 04 09:44:50 crc kubenswrapper[4841]: E1204 09:44:50.641881 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:44:51 crc kubenswrapper[4841]: I1204 09:44:51.627165 4841 generic.go:334] "Generic (PLEG): container finished" podID="5bdd240e-976c-408f-9ace-3cd860da98e4" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" exitCode=0 Dec 04 09:44:51 crc kubenswrapper[4841]: I1204 09:44:51.627265 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" event={"ID":"5bdd240e-976c-408f-9ace-3cd860da98e4","Type":"ContainerDied","Data":"409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5"} Dec 04 09:44:51 crc kubenswrapper[4841]: I1204 09:44:51.627561 4841 scope.go:117] "RemoveContainer" containerID="dbe8eeee837ded207479fbd61ebaa419fe865d6e5d03d556bce405e52905df7e" Dec 04 09:44:51 crc kubenswrapper[4841]: I1204 09:44:51.628242 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:44:51 crc kubenswrapper[4841]: E1204 09:44:51.628624 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.095523 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k6wxw/must-gather-bh6vg"] Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.096140 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-k6wxw/must-gather-bh6vg" podUID="7093ee10-c868-4939-9e21-6dd7541f0c5b" containerName="copy" containerID="cri-o://a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b" gracePeriod=2 Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.101786 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k6wxw/must-gather-bh6vg"] Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.490750 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k6wxw_must-gather-bh6vg_7093ee10-c868-4939-9e21-6dd7541f0c5b/copy/0.log" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.493460 4841 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k6wxw/must-gather-bh6vg" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.572957 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7093ee10-c868-4939-9e21-6dd7541f0c5b-must-gather-output\") pod \"7093ee10-c868-4939-9e21-6dd7541f0c5b\" (UID: \"7093ee10-c868-4939-9e21-6dd7541f0c5b\") " Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.573019 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxfl9\" (UniqueName: \"kubernetes.io/projected/7093ee10-c868-4939-9e21-6dd7541f0c5b-kube-api-access-dxfl9\") pod \"7093ee10-c868-4939-9e21-6dd7541f0c5b\" (UID: \"7093ee10-c868-4939-9e21-6dd7541f0c5b\") " Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.590575 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7093ee10-c868-4939-9e21-6dd7541f0c5b-kube-api-access-dxfl9" (OuterVolumeSpecName: "kube-api-access-dxfl9") pod "7093ee10-c868-4939-9e21-6dd7541f0c5b" (UID: "7093ee10-c868-4939-9e21-6dd7541f0c5b"). InnerVolumeSpecName "kube-api-access-dxfl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.639068 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7093ee10-c868-4939-9e21-6dd7541f0c5b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7093ee10-c868-4939-9e21-6dd7541f0c5b" (UID: "7093ee10-c868-4939-9e21-6dd7541f0c5b"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.665849 4841 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k6wxw_must-gather-bh6vg_7093ee10-c868-4939-9e21-6dd7541f0c5b/copy/0.log" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.666556 4841 generic.go:334] "Generic (PLEG): container finished" podID="7093ee10-c868-4939-9e21-6dd7541f0c5b" containerID="a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b" exitCode=143 Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.666613 4841 scope.go:117] "RemoveContainer" containerID="a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.666728 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k6wxw/must-gather-bh6vg" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.674652 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxfl9\" (UniqueName: \"kubernetes.io/projected/7093ee10-c868-4939-9e21-6dd7541f0c5b-kube-api-access-dxfl9\") on node \"crc\" DevicePath \"\"" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.674680 4841 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7093ee10-c868-4939-9e21-6dd7541f0c5b-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.705983 4841 scope.go:117] "RemoveContainer" containerID="0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.752093 4841 scope.go:117] "RemoveContainer" containerID="a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b" Dec 04 09:44:55 crc kubenswrapper[4841]: E1204 09:44:55.752746 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b\": container with ID starting with a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b not found: ID does not exist" containerID="a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.752795 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b"} err="failed to get container status \"a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b\": rpc error: code = NotFound desc = could not find container \"a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b\": container with ID starting with a8e7add33064994c7965eb5b094f5e5afe13ad9b334420d4b1b1f771f187cb9b not found: ID does not exist" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.752825 4841 scope.go:117] "RemoveContainer" containerID="0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575" Dec 04 09:44:55 crc kubenswrapper[4841]: E1204 09:44:55.756538 4841 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575\": container with ID starting with 0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575 not found: ID does not exist" containerID="0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575" Dec 04 09:44:55 crc kubenswrapper[4841]: I1204 09:44:55.756565 4841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575"} err="failed to get container status \"0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575\": rpc error: code = NotFound desc = could not find container \"0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575\": 
container with ID starting with 0897c6634d6d8436e78aafeb05d25f74b768d8d31beacf4f965ca928f57f3575 not found: ID does not exist" Dec 04 09:44:57 crc kubenswrapper[4841]: I1204 09:44:57.624853 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7093ee10-c868-4939-9e21-6dd7541f0c5b" path="/var/lib/kubelet/pods/7093ee10-c868-4939-9e21-6dd7541f0c5b/volumes" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.139666 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b"] Dec 04 09:45:00 crc kubenswrapper[4841]: E1204 09:45:00.140302 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7093ee10-c868-4939-9e21-6dd7541f0c5b" containerName="copy" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.140317 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="7093ee10-c868-4939-9e21-6dd7541f0c5b" containerName="copy" Dec 04 09:45:00 crc kubenswrapper[4841]: E1204 09:45:00.140326 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7093ee10-c868-4939-9e21-6dd7541f0c5b" containerName="gather" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.140333 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="7093ee10-c868-4939-9e21-6dd7541f0c5b" containerName="gather" Dec 04 09:45:00 crc kubenswrapper[4841]: E1204 09:45:00.140354 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4" containerName="registry-server" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.140360 4841 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4" containerName="registry-server" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.140491 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e3a2f8f-4f60-4fb4-a50a-5f66aa07c8e4" containerName="registry-server" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.140507 4841 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7093ee10-c868-4939-9e21-6dd7541f0c5b" containerName="copy" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.140520 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="7093ee10-c868-4939-9e21-6dd7541f0c5b" containerName="gather" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.141047 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.143070 4841 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.143534 4841 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.155508 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b"] Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.242164 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7smz\" (UniqueName: \"kubernetes.io/projected/ba441abe-b5fa-4906-9421-210beb115149-kube-api-access-x7smz\") pod \"collect-profiles-29414025-ll75b\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.242253 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba441abe-b5fa-4906-9421-210beb115149-config-volume\") pod \"collect-profiles-29414025-ll75b\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.242383 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba441abe-b5fa-4906-9421-210beb115149-secret-volume\") pod \"collect-profiles-29414025-ll75b\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.343547 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba441abe-b5fa-4906-9421-210beb115149-config-volume\") pod \"collect-profiles-29414025-ll75b\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.343713 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba441abe-b5fa-4906-9421-210beb115149-secret-volume\") pod \"collect-profiles-29414025-ll75b\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.343787 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7smz\" (UniqueName: \"kubernetes.io/projected/ba441abe-b5fa-4906-9421-210beb115149-kube-api-access-x7smz\") pod \"collect-profiles-29414025-ll75b\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.344969 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ba441abe-b5fa-4906-9421-210beb115149-config-volume\") pod \"collect-profiles-29414025-ll75b\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.351924 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba441abe-b5fa-4906-9421-210beb115149-secret-volume\") pod \"collect-profiles-29414025-ll75b\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.368562 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7smz\" (UniqueName: \"kubernetes.io/projected/ba441abe-b5fa-4906-9421-210beb115149-kube-api-access-x7smz\") pod \"collect-profiles-29414025-ll75b\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.457559 4841 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:00 crc kubenswrapper[4841]: I1204 09:45:00.922907 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b"] Dec 04 09:45:01 crc kubenswrapper[4841]: I1204 09:45:01.709440 4841 generic.go:334] "Generic (PLEG): container finished" podID="ba441abe-b5fa-4906-9421-210beb115149" containerID="e849ab9256db3b6c29b498c395f2f663c0a7119f1c41d4c4c05fcf6b3ca4a36c" exitCode=0 Dec 04 09:45:01 crc kubenswrapper[4841]: I1204 09:45:01.709505 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" event={"ID":"ba441abe-b5fa-4906-9421-210beb115149","Type":"ContainerDied","Data":"e849ab9256db3b6c29b498c395f2f663c0a7119f1c41d4c4c05fcf6b3ca4a36c"} Dec 04 09:45:01 crc kubenswrapper[4841]: I1204 09:45:01.709735 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" event={"ID":"ba441abe-b5fa-4906-9421-210beb115149","Type":"ContainerStarted","Data":"627929f3e7a4604fa07bd60f214742214f8978f9a60ce50a966091e9140a2ce7"} Dec 04 09:45:02 crc kubenswrapper[4841]: I1204 09:45:02.617749 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:45:02 crc kubenswrapper[4841]: E1204 09:45:02.618503 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.013050 4841 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.085807 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba441abe-b5fa-4906-9421-210beb115149-secret-volume\") pod \"ba441abe-b5fa-4906-9421-210beb115149\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.085932 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba441abe-b5fa-4906-9421-210beb115149-config-volume\") pod \"ba441abe-b5fa-4906-9421-210beb115149\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.085975 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7smz\" (UniqueName: \"kubernetes.io/projected/ba441abe-b5fa-4906-9421-210beb115149-kube-api-access-x7smz\") pod \"ba441abe-b5fa-4906-9421-210beb115149\" (UID: \"ba441abe-b5fa-4906-9421-210beb115149\") " Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.086558 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba441abe-b5fa-4906-9421-210beb115149-config-volume" (OuterVolumeSpecName: "config-volume") pod "ba441abe-b5fa-4906-9421-210beb115149" (UID: "ba441abe-b5fa-4906-9421-210beb115149"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.097829 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba441abe-b5fa-4906-9421-210beb115149-kube-api-access-x7smz" (OuterVolumeSpecName: "kube-api-access-x7smz") pod "ba441abe-b5fa-4906-9421-210beb115149" (UID: "ba441abe-b5fa-4906-9421-210beb115149"). 
InnerVolumeSpecName "kube-api-access-x7smz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.097803 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba441abe-b5fa-4906-9421-210beb115149-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ba441abe-b5fa-4906-9421-210beb115149" (UID: "ba441abe-b5fa-4906-9421-210beb115149"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.187993 4841 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba441abe-b5fa-4906-9421-210beb115149-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.188028 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7smz\" (UniqueName: \"kubernetes.io/projected/ba441abe-b5fa-4906-9421-210beb115149-kube-api-access-x7smz\") on node \"crc\" DevicePath \"\"" Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.188040 4841 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba441abe-b5fa-4906-9421-210beb115149-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.727460 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" event={"ID":"ba441abe-b5fa-4906-9421-210beb115149","Type":"ContainerDied","Data":"627929f3e7a4604fa07bd60f214742214f8978f9a60ce50a966091e9140a2ce7"} Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.727502 4841 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="627929f3e7a4604fa07bd60f214742214f8978f9a60ce50a966091e9140a2ce7" Dec 04 09:45:03 crc kubenswrapper[4841]: I1204 09:45:03.727574 4841 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414025-ll75b" Dec 04 09:45:16 crc kubenswrapper[4841]: I1204 09:45:16.616672 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:45:16 crc kubenswrapper[4841]: E1204 09:45:16.617871 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:45:30 crc kubenswrapper[4841]: I1204 09:45:30.617040 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:45:30 crc kubenswrapper[4841]: E1204 09:45:30.617622 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:45:34 crc kubenswrapper[4841]: I1204 09:45:34.950413 4841 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tjbd5"] Dec 04 09:45:34 crc kubenswrapper[4841]: E1204 09:45:34.952825 4841 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba441abe-b5fa-4906-9421-210beb115149" containerName="collect-profiles" Dec 04 09:45:34 crc kubenswrapper[4841]: I1204 09:45:34.953038 4841 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba441abe-b5fa-4906-9421-210beb115149" containerName="collect-profiles" Dec 04 09:45:34 crc kubenswrapper[4841]: I1204 09:45:34.953399 4841 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba441abe-b5fa-4906-9421-210beb115149" containerName="collect-profiles" Dec 04 09:45:34 crc kubenswrapper[4841]: I1204 09:45:34.955134 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:34 crc kubenswrapper[4841]: I1204 09:45:34.961380 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tjbd5"] Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.129481 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-utilities\") pod \"community-operators-tjbd5\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.129881 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-catalog-content\") pod \"community-operators-tjbd5\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.129906 4841 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmbsx\" (UniqueName: \"kubernetes.io/projected/b9a6c9dd-14d9-4741-b3fc-e372a749df84-kube-api-access-nmbsx\") pod \"community-operators-tjbd5\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.231329 4841 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-utilities\") pod \"community-operators-tjbd5\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.231426 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-catalog-content\") pod \"community-operators-tjbd5\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.231451 4841 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmbsx\" (UniqueName: \"kubernetes.io/projected/b9a6c9dd-14d9-4741-b3fc-e372a749df84-kube-api-access-nmbsx\") pod \"community-operators-tjbd5\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.232073 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-utilities\") pod \"community-operators-tjbd5\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.232086 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-catalog-content\") pod \"community-operators-tjbd5\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.262305 4841 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nmbsx\" (UniqueName: \"kubernetes.io/projected/b9a6c9dd-14d9-4741-b3fc-e372a749df84-kube-api-access-nmbsx\") pod \"community-operators-tjbd5\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.326439 4841 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:35 crc kubenswrapper[4841]: I1204 09:45:35.634749 4841 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tjbd5"] Dec 04 09:45:36 crc kubenswrapper[4841]: I1204 09:45:36.024865 4841 generic.go:334] "Generic (PLEG): container finished" podID="b9a6c9dd-14d9-4741-b3fc-e372a749df84" containerID="81e19ffa768373558c6384c84080c753087644ae8019c6d59e190c5cd8e8c153" exitCode=0 Dec 04 09:45:36 crc kubenswrapper[4841]: I1204 09:45:36.025050 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjbd5" event={"ID":"b9a6c9dd-14d9-4741-b3fc-e372a749df84","Type":"ContainerDied","Data":"81e19ffa768373558c6384c84080c753087644ae8019c6d59e190c5cd8e8c153"} Dec 04 09:45:36 crc kubenswrapper[4841]: I1204 09:45:36.025192 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjbd5" event={"ID":"b9a6c9dd-14d9-4741-b3fc-e372a749df84","Type":"ContainerStarted","Data":"1583253ec86c726c1bf140945cc9e164b6900bb6a4f5db9e98c4434ac00c897f"} Dec 04 09:45:38 crc kubenswrapper[4841]: I1204 09:45:38.045552 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjbd5" event={"ID":"b9a6c9dd-14d9-4741-b3fc-e372a749df84","Type":"ContainerStarted","Data":"aef7c6fa66fa865525e490aef032499a5566175b8aaccbfb23a973ebadd63a4c"} Dec 04 09:45:39 crc kubenswrapper[4841]: I1204 09:45:39.059447 4841 generic.go:334] "Generic (PLEG): container finished" 
podID="b9a6c9dd-14d9-4741-b3fc-e372a749df84" containerID="aef7c6fa66fa865525e490aef032499a5566175b8aaccbfb23a973ebadd63a4c" exitCode=0 Dec 04 09:45:39 crc kubenswrapper[4841]: I1204 09:45:39.059588 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjbd5" event={"ID":"b9a6c9dd-14d9-4741-b3fc-e372a749df84","Type":"ContainerDied","Data":"aef7c6fa66fa865525e490aef032499a5566175b8aaccbfb23a973ebadd63a4c"} Dec 04 09:45:40 crc kubenswrapper[4841]: I1204 09:45:40.071582 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjbd5" event={"ID":"b9a6c9dd-14d9-4741-b3fc-e372a749df84","Type":"ContainerStarted","Data":"e25f2f1ce0ff8f026790a65b7c87c08018601fa347d04ed9f570af7c86a6e718"} Dec 04 09:45:40 crc kubenswrapper[4841]: I1204 09:45:40.095207 4841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tjbd5" podStartSLOduration=2.553165194 podStartE2EDuration="6.095187004s" podCreationTimestamp="2025-12-04 09:45:34 +0000 UTC" firstStartedPulling="2025-12-04 09:45:36.027419539 +0000 UTC m=+1602.779209753" lastFinishedPulling="2025-12-04 09:45:39.569441359 +0000 UTC m=+1606.321231563" observedRunningTime="2025-12-04 09:45:40.093682128 +0000 UTC m=+1606.845472352" watchObservedRunningTime="2025-12-04 09:45:40.095187004 +0000 UTC m=+1606.846977238" Dec 04 09:45:43 crc kubenswrapper[4841]: I1204 09:45:43.624205 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:45:43 crc kubenswrapper[4841]: E1204 09:45:43.624909 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:45:45 crc kubenswrapper[4841]: I1204 09:45:45.327439 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:45 crc kubenswrapper[4841]: I1204 09:45:45.327839 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:45 crc kubenswrapper[4841]: I1204 09:45:45.408934 4841 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:46 crc kubenswrapper[4841]: I1204 09:45:46.211024 4841 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:46 crc kubenswrapper[4841]: I1204 09:45:46.284009 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tjbd5"] Dec 04 09:45:48 crc kubenswrapper[4841]: I1204 09:45:48.147203 4841 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tjbd5" podUID="b9a6c9dd-14d9-4741-b3fc-e372a749df84" containerName="registry-server" containerID="cri-o://e25f2f1ce0ff8f026790a65b7c87c08018601fa347d04ed9f570af7c86a6e718" gracePeriod=2 Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.166487 4841 generic.go:334] "Generic (PLEG): container finished" podID="b9a6c9dd-14d9-4741-b3fc-e372a749df84" containerID="e25f2f1ce0ff8f026790a65b7c87c08018601fa347d04ed9f570af7c86a6e718" exitCode=0 Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.166579 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjbd5" event={"ID":"b9a6c9dd-14d9-4741-b3fc-e372a749df84","Type":"ContainerDied","Data":"e25f2f1ce0ff8f026790a65b7c87c08018601fa347d04ed9f570af7c86a6e718"} Dec 
04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.434533 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.594883 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmbsx\" (UniqueName: \"kubernetes.io/projected/b9a6c9dd-14d9-4741-b3fc-e372a749df84-kube-api-access-nmbsx\") pod \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.595014 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-utilities\") pod \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.595142 4841 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-catalog-content\") pod \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\" (UID: \"b9a6c9dd-14d9-4741-b3fc-e372a749df84\") " Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.596653 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-utilities" (OuterVolumeSpecName: "utilities") pod "b9a6c9dd-14d9-4741-b3fc-e372a749df84" (UID: "b9a6c9dd-14d9-4741-b3fc-e372a749df84"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.602007 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9a6c9dd-14d9-4741-b3fc-e372a749df84-kube-api-access-nmbsx" (OuterVolumeSpecName: "kube-api-access-nmbsx") pod "b9a6c9dd-14d9-4741-b3fc-e372a749df84" (UID: "b9a6c9dd-14d9-4741-b3fc-e372a749df84"). InnerVolumeSpecName "kube-api-access-nmbsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.669390 4841 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9a6c9dd-14d9-4741-b3fc-e372a749df84" (UID: "b9a6c9dd-14d9-4741-b3fc-e372a749df84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.696884 4841 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.697057 4841 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9a6c9dd-14d9-4741-b3fc-e372a749df84-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 09:45:50 crc kubenswrapper[4841]: I1204 09:45:50.697265 4841 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmbsx\" (UniqueName: \"kubernetes.io/projected/b9a6c9dd-14d9-4741-b3fc-e372a749df84-kube-api-access-nmbsx\") on node \"crc\" DevicePath \"\"" Dec 04 09:45:51 crc kubenswrapper[4841]: I1204 09:45:51.174967 4841 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjbd5" 
event={"ID":"b9a6c9dd-14d9-4741-b3fc-e372a749df84","Type":"ContainerDied","Data":"1583253ec86c726c1bf140945cc9e164b6900bb6a4f5db9e98c4434ac00c897f"} Dec 04 09:45:51 crc kubenswrapper[4841]: I1204 09:45:51.175025 4841 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tjbd5" Dec 04 09:45:51 crc kubenswrapper[4841]: I1204 09:45:51.175034 4841 scope.go:117] "RemoveContainer" containerID="e25f2f1ce0ff8f026790a65b7c87c08018601fa347d04ed9f570af7c86a6e718" Dec 04 09:45:51 crc kubenswrapper[4841]: I1204 09:45:51.197108 4841 scope.go:117] "RemoveContainer" containerID="aef7c6fa66fa865525e490aef032499a5566175b8aaccbfb23a973ebadd63a4c" Dec 04 09:45:51 crc kubenswrapper[4841]: I1204 09:45:51.207915 4841 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tjbd5"] Dec 04 09:45:51 crc kubenswrapper[4841]: I1204 09:45:51.218354 4841 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tjbd5"] Dec 04 09:45:51 crc kubenswrapper[4841]: I1204 09:45:51.221690 4841 scope.go:117] "RemoveContainer" containerID="81e19ffa768373558c6384c84080c753087644ae8019c6d59e190c5cd8e8c153" Dec 04 09:45:51 crc kubenswrapper[4841]: I1204 09:45:51.631959 4841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9a6c9dd-14d9-4741-b3fc-e372a749df84" path="/var/lib/kubelet/pods/b9a6c9dd-14d9-4741-b3fc-e372a749df84/volumes" Dec 04 09:45:58 crc kubenswrapper[4841]: I1204 09:45:58.616633 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:45:58 crc kubenswrapper[4841]: E1204 09:45:58.617372 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:46:12 crc kubenswrapper[4841]: I1204 09:46:12.616920 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:46:12 crc kubenswrapper[4841]: E1204 09:46:12.617588 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:46:24 crc kubenswrapper[4841]: I1204 09:46:24.617196 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:46:24 crc kubenswrapper[4841]: E1204 09:46:24.618467 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:46:39 crc kubenswrapper[4841]: I1204 09:46:39.616844 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:46:39 crc kubenswrapper[4841]: E1204 09:46:39.618173 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:46:50 crc kubenswrapper[4841]: I1204 09:46:50.617580 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:46:50 crc kubenswrapper[4841]: E1204 09:46:50.618740 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:47:05 crc kubenswrapper[4841]: I1204 09:47:05.616445 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:47:05 crc kubenswrapper[4841]: E1204 09:47:05.617385 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:47:17 crc kubenswrapper[4841]: I1204 09:47:17.617653 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:47:17 crc kubenswrapper[4841]: E1204 09:47:17.618927 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:47:28 crc kubenswrapper[4841]: I1204 09:47:28.617265 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:47:28 crc kubenswrapper[4841]: E1204 09:47:28.618490 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:47:41 crc kubenswrapper[4841]: I1204 09:47:41.617138 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:47:41 crc kubenswrapper[4841]: E1204 09:47:41.618297 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:47:52 crc kubenswrapper[4841]: I1204 09:47:52.616407 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:47:52 crc kubenswrapper[4841]: E1204 09:47:52.617166 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:48:04 crc kubenswrapper[4841]: I1204 09:48:04.616644 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:48:04 crc kubenswrapper[4841]: E1204 09:48:04.617298 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:48:17 crc kubenswrapper[4841]: I1204 09:48:17.617241 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:48:17 crc kubenswrapper[4841]: E1204 09:48:17.617734 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4" Dec 04 09:48:28 crc kubenswrapper[4841]: I1204 09:48:28.617347 4841 scope.go:117] "RemoveContainer" containerID="409a9b709cc465504b1f80e05dd97d1d653ee88f52307daa20b5658884e3cca5" Dec 04 09:48:28 crc kubenswrapper[4841]: E1204 09:48:28.618536 4841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rxw4w_openshift-machine-config-operator(5bdd240e-976c-408f-9ace-3cd860da98e4)\"" pod="openshift-machine-config-operator/machine-config-daemon-rxw4w" podUID="5bdd240e-976c-408f-9ace-3cd860da98e4"